00:00:00.001 Started by upstream project "autotest-per-patch" build number 132087
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.062 Fetching changes from the remote Git repository
00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.104 Using shallow fetch with depth 1
00:00:00.104 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.104 > git --version # timeout=10
00:00:00.138 > git --version # 'git version 2.39.2'
00:00:00.138 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.170 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.170 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.091 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.102 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.113 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.113 > git config core.sparsecheckout # timeout=10
00:00:05.124 > git read-tree -mu HEAD # timeout=10
00:00:05.139 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.158 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.158 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.261 [Pipeline] Start of Pipeline
00:00:05.273 [Pipeline] library
00:00:05.275 Loading library shm_lib@master
00:00:05.275 Library shm_lib@master is cached. Copying from home.
00:00:05.289 [Pipeline] node
00:00:05.307 Running on WFP6 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.308 [Pipeline] {
00:00:05.317 [Pipeline] catchError
00:00:05.318 [Pipeline] {
00:00:05.328 [Pipeline] wrap
00:00:05.336 [Pipeline] {
00:00:05.346 [Pipeline] stage
00:00:05.349 [Pipeline] { (Prologue)
00:00:05.554 [Pipeline] sh
00:00:06.293 + logger -p user.info -t JENKINS-CI
00:00:06.311 [Pipeline] echo
00:00:06.312 Node: WFP6
00:00:06.319 [Pipeline] sh
00:00:06.647 [Pipeline] setCustomBuildProperty
00:00:06.658 [Pipeline] echo
00:00:06.660 Cleanup processes
00:00:06.665 [Pipeline] sh
00:00:06.953 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.953 194720 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.966 [Pipeline] sh
00:00:07.255 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.255 ++ grep -v 'sudo pgrep'
00:00:07.255 ++ awk '{print $1}'
00:00:07.255 + sudo kill -9
00:00:07.255 + true
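The cleanup step above is a small shell idiom worth noting: pgrep lists candidate processes, grep -v drops the pgrep invocation itself, awk extracts the PIDs, and the trailing true swallows kill's non-zero exit when nothing matched, so a set -e script does not abort the job. A minimal standalone sketch of the same idiom (the workspace path is the one from this job; any marker string works):

    # collect PIDs of leftover test processes, excluding the pgrep itself
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
    # force-kill them; '|| true' keeps an empty PID list from failing the build
    sudo kill -9 $pids || true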
00:00:07.273 [Pipeline] cleanWs
00:00:07.284 [WS-CLEANUP] Deleting project workspace...
00:00:07.284 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.295 [WS-CLEANUP] done
00:00:07.299 [Pipeline] setCustomBuildProperty
00:00:07.315 [Pipeline] sh
00:00:07.601 + sudo git config --global --replace-all safe.directory '*'
00:00:07.670 [Pipeline] httpRequest
00:00:09.358 [Pipeline] echo
00:00:09.359 Sorcerer 10.211.164.101 is alive
00:00:09.370 [Pipeline] retry
00:00:09.373 [Pipeline] {
00:00:09.388 [Pipeline] httpRequest
00:00:09.392 HttpMethod: GET
00:00:09.393 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:09.394 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:09.420 Response Code: HTTP/1.1 200 OK
00:00:09.420 Success: Status code 200 is in the accepted range: 200,404
00:00:09.421 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:33.340 [Pipeline] }
00:00:33.357 [Pipeline] // retry
00:00:33.365 [Pipeline] sh
00:00:33.656 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:33.674 [Pipeline] httpRequest
00:00:34.074 [Pipeline] echo
00:00:34.076 Sorcerer 10.211.164.101 is alive
00:00:34.086 [Pipeline] retry
00:00:34.088 [Pipeline] {
00:00:34.102 [Pipeline] httpRequest
00:00:34.110 HttpMethod: GET
00:00:34.110 URL: http://10.211.164.101/packages/spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:00:34.122 Sending request to url: http://10.211.164.101/packages/spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:00:34.136 Response Code: HTTP/1.1 200 OK
00:00:34.137 Success: Status code 200 is in the accepted range: 200,404
00:00:34.138 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:04:26.622 [Pipeline] }
00:04:26.640 [Pipeline] // retry
00:04:26.648 [Pipeline] sh
00:04:26.945 + tar --no-same-owner -xf spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:04:29.499 [Pipeline] sh
00:04:29.787 + git -C spdk log --oneline -n5
00:04:29.787 ca5713c38 bdev/malloc: Support accel sequence when DIF is enabled
00:04:29.787 18e36da1a bdev/malloc: malloc_done() uses switch-case for clean up
00:04:29.787 481542548 accel: Add spdk_accel_sequence_has_task() to query what sequence does
00:04:29.787 a4d8602f2 nvmf: Add no_metadata option to nvmf_subsystem_add_ns
00:04:29.787 15b283ee8 nvmf: Get metadata config by not bdev but bdev_desc
00:04:29.798 [Pipeline] }
00:04:29.810 [Pipeline] // stage
00:04:29.819 [Pipeline] stage
00:04:29.821 [Pipeline] { (Prepare)
00:04:29.838 [Pipeline] writeFile
00:04:29.853 [Pipeline] sh
00:04:30.139 + logger -p user.info -t JENKINS-CI
00:04:30.153 [Pipeline] sh
00:04:30.441 + logger -p user.info -t JENKINS-CI
00:04:30.454 [Pipeline] sh
00:04:30.739 + cat autorun-spdk.conf
00:04:30.739 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:30.739 SPDK_TEST_NVMF=1
00:04:30.739 SPDK_TEST_NVME_CLI=1
00:04:30.739 SPDK_TEST_NVMF_NICS=mlx5
00:04:30.739 SPDK_RUN_UBSAN=1
00:04:30.739 NET_TYPE=phy
00:04:30.747 RUN_NIGHTLY=0
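autorun-spdk.conf, printed in full just above, is a plain KEY=VALUE shell fragment: the test scripts simply source it and branch on the flags. A hedged sketch of that consumption pattern, mirroring the withEnv trace that follows (the file path and variable names are from the log; the case body is the same selection the trace performs):

    conf=/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
    set -ex                       # echo each command, stop on first error
    [[ -f $conf ]]                # guard: the conf must exist
    source "$conf"                # pulls in SPDK_TEST_NVMF=1, NET_TYPE=phy, ...
    case $SPDK_TEST_NVMF_NICS in
        mlx5) DRIVERS=mlx5_ib ;;  # pick the RDMA driver matching the NIC flag
    esac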
00:04:30.751 [Pipeline] readFile
00:04:30.791 [Pipeline] withEnv
00:04:30.793 [Pipeline] {
00:04:30.805 [Pipeline] sh
00:04:31.092 + set -ex
00:04:31.092 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:04:31.092 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:04:31.092 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:31.092 ++ SPDK_TEST_NVMF=1
00:04:31.092 ++ SPDK_TEST_NVME_CLI=1
00:04:31.092 ++ SPDK_TEST_NVMF_NICS=mlx5
00:04:31.092 ++ SPDK_RUN_UBSAN=1
00:04:31.092 ++ NET_TYPE=phy
00:04:31.092 ++ RUN_NIGHTLY=0
00:04:31.092 + case $SPDK_TEST_NVMF_NICS in
00:04:31.092 + DRIVERS=mlx5_ib
00:04:31.092 + [[ -n mlx5_ib ]]
00:04:31.092 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:04:31.092 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:04:37.667 rmmod: ERROR: Module irdma is not currently loaded
00:04:37.667 rmmod: ERROR: Module i40iw is not currently loaded
00:04:37.667 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:04:37.667 + true
00:04:37.667 + for D in $DRIVERS
00:04:37.667 + sudo modprobe mlx5_ib
00:04:37.667 + exit 0
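Driver setup above follows a reset-then-load pattern: every candidate RDMA module is removed in one rmmod call, the expected "not currently loaded" errors are neutralized with true, and only the driver matching SPDK_TEST_NVMF_NICS is probed back in. Condensed into a sketch of the sequence the trace shows:

    # unload all candidate RDMA drivers; modules that are absent just error
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    # reload only the driver this job was configured for (mlx5_ib here)
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done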
00:04:37.677 [Pipeline] }
00:04:37.691 [Pipeline] // withEnv
00:04:37.697 [Pipeline] }
00:04:37.710 [Pipeline] // stage
00:04:37.720 [Pipeline] catchError
00:04:37.722 [Pipeline] {
00:04:37.734 [Pipeline] timeout
00:04:37.735 Timeout set to expire in 1 hr 0 min
00:04:37.736 [Pipeline] {
00:04:37.750 [Pipeline] stage
00:04:37.752 [Pipeline] { (Tests)
00:04:37.766 [Pipeline] sh
00:04:38.054 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:04:38.054 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:04:38.054 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:04:38.054 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:04:38.054 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:04:38.054 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:04:38.054 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:04:38.054 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:04:38.054 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:04:38.054 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:04:38.054 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:04:38.054 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:04:38.054 + source /etc/os-release
00:04:38.054 ++ NAME='Fedora Linux'
00:04:38.054 ++ VERSION='39 (Cloud Edition)'
00:04:38.054 ++ ID=fedora
00:04:38.054 ++ VERSION_ID=39
00:04:38.054 ++ VERSION_CODENAME=
00:04:38.054 ++ PLATFORM_ID=platform:f39
00:04:38.054 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:38.054 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:38.054 ++ LOGO=fedora-logo-icon
00:04:38.054 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:38.054 ++ HOME_URL=https://fedoraproject.org/
00:04:38.054 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:38.054 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:38.054 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:38.054 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:38.054 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:38.054 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:38.054 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:38.054 ++ SUPPORT_END=2024-11-12
00:04:38.054 ++ VARIANT='Cloud Edition'
00:04:38.054 ++ VARIANT_ID=cloud
00:04:38.054 + uname -a
00:04:38.054 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:38.054 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:04:40.594 Hugepages
00:04:40.594 node hugesize free / total
00:04:40.594 node0 1048576kB 0 / 0
00:04:40.594 node0 2048kB 0 / 0
00:04:40.594 node1 1048576kB 0 / 0
00:04:40.594 node1 2048kB 0 / 0
00:04:40.594
00:04:40.594 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:40.594 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:40.594 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:40.594 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:40.594 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:40.594 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:40.595 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:40.595 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:40.595 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:40.595 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:40.595 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:40.595 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:40.595 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:40.595 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:40.595 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:40.595 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:40.595 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:40.595 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:40.595 + rm -f /tmp/spdk-ld-path
00:04:40.595 + source autorun-spdk.conf
00:04:40.595 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:40.595 ++ SPDK_TEST_NVMF=1
00:04:40.595 ++ SPDK_TEST_NVME_CLI=1
00:04:40.595 ++ SPDK_TEST_NVMF_NICS=mlx5
00:04:40.595 ++ SPDK_RUN_UBSAN=1
00:04:40.595 ++ NET_TYPE=phy
00:04:40.595 ++ RUN_NIGHTLY=0
00:04:40.595 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:40.595 + [[ -n '' ]]
00:04:40.595 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:04:40.595 + for M in /var/spdk/build-*-manifest.txt
00:04:40.595 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:40.595 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:04:40.595 + for M in /var/spdk/build-*-manifest.txt
00:04:40.595 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:40.595 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:04:40.595 + for M in /var/spdk/build-*-manifest.txt
00:04:40.595 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:40.595 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
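The manifest copies above use a glob-guard loop: iterate the pattern, and test -f before cp so the loop stays harmless when a manifest is missing (an unmatched glob would otherwise iterate once over the literal pattern string). As a sketch:

    out=/var/jenkins/workspace/nvmf-phy-autotest/output
    for M in /var/spdk/build-*-manifest.txt; do
        [[ -f $M ]] && cp "$M" "$out/"   # skip the literal pattern if nothing matched
    done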
00:04:40.595 ++ uname
00:04:40.595 + [[ Linux == \L\i\n\u\x ]]
00:04:40.595 + sudo dmesg -T
00:04:40.595 + sudo dmesg --clear
00:04:40.595 + dmesg_pid=196192
00:04:40.595 + [[ Fedora Linux == FreeBSD ]]
00:04:40.595 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:40.595 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:40.595 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:40.595 + [[ -x /usr/src/fio-static/fio ]]
00:04:40.595 + export FIO_BIN=/usr/src/fio-static/fio
00:04:40.595 + FIO_BIN=/usr/src/fio-static/fio
00:04:40.595 + sudo dmesg -Tw
00:04:40.595 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:40.595 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:40.595 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:40.595 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:40.595 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:40.595 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:40.595 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:40.595 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:40.595 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:04:40.595 Test configuration:
00:04:40.595 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:40.595 SPDK_TEST_NVMF=1
00:04:40.595 SPDK_TEST_NVME_CLI=1
00:04:40.595 SPDK_TEST_NVMF_NICS=mlx5
00:04:40.595 SPDK_RUN_UBSAN=1
00:04:40.595 NET_TYPE=phy
00:04:40.855 RUN_NIGHTLY=0
08:41:03 -- common/autotest_common.sh@1688 -- $ [[ n == y ]]
08:41:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
08:41:03 -- scripts/common.sh@15 -- $ shopt -s extglob
08:41:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
08:41:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
08:41:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
08:41:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:41:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:41:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:41:03 -- paths/export.sh@5 -- $ export PATH
08:41:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:41:03 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
08:41:03 -- common/autobuild_common.sh@486 -- $ date +%s
08:41:03 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730878863.XXXXXX
08:41:03 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730878863.G9n45l
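The scratch-directory setup just above combines date +%s with mktemp's template mode: the epoch stamp makes a build easy to correlate with its monitor logs, and mktemp -dt creates a private directory under $TMPDIR. A sketch, assuming the same naming scheme:

    stamp=$(date +%s)                                     # e.g. 1730878863
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")   # e.g. /tmp/spdk_1730878863.G9n45l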
08:41:03 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:04:40.855 08:41:03 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
08:41:03 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
08:41:03 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
08:41:03 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
08:41:03 -- common/autobuild_common.sh@502 -- $ get_config_params
08:41:03 -- common/autotest_common.sh@407 -- $ xtrace_disable
08:41:03 -- common/autotest_common.sh@10 -- $ set +x
08:41:03 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
08:41:03 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
08:41:03 -- pm/common@17 -- $ local monitor
08:41:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:41:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:41:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:41:03 -- pm/common@21 -- $ date +%s
08:41:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:41:03 -- pm/common@21 -- $ date +%s
08:41:03 -- pm/common@25 -- $ sleep 1
08:41:03 -- pm/common@21 -- $ date +%s
08:41:03 -- pm/common@21 -- $ date +%s
08:41:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878863
08:41:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878863
08:41:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878863
08:41:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878863
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878863_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878863_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878863_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878863_collect-bmc-pm.bmc.pm.log
00:04:41.819 08:41:04 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
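start_monitor_resources launches the collect-* helpers with the same epoch-stamped log prefix and then arms a trap so they are stopped however the build exits. The log does not show the helpers' internals; a rough sketch of the launch pattern under the assumption that the helpers detach themselves (the "Redirecting to" lines above are their own output):

    ts=$(date +%s)
    pm=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
    power_out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
    "$pm/collect-cpu-load" -d "$power_out" -l -p "monitor.autobuild.sh.$ts"
    "$pm/collect-vmstat" -d "$power_out" -l -p "monitor.autobuild.sh.$ts"
    trap stop_monitor_resources EXIT   # SPDK helper; tears the monitors down on exit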
00:04:41.819 08:41:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
08:41:04 -- spdk/autobuild.sh@12 -- $ umask 022
08:41:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
08:41:04 -- spdk/autobuild.sh@16 -- $ date -u
00:04:41.819 Wed Nov 6 07:41:04 AM UTC 2024
08:41:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:41.819 v25.01-pre-144-gca5713c38
08:41:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
08:41:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
08:41:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
08:41:04 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
08:41:04 -- common/autotest_common.sh@1107 -- $ xtrace_disable
08:41:04 -- common/autotest_common.sh@10 -- $ set +x
00:04:41.819 ************************************
00:04:41.820 START TEST ubsan
00:04:41.820 ************************************
08:41:04 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:04:41.820 using ubsan
00:04:41.820
00:04:41.820 real 0m0.000s
00:04:41.820 user 0m0.000s
00:04:41.820 sys 0m0.000s
08:41:04 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
08:41:04 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:41.820 ************************************
00:04:41.820 END TEST ubsan
00:04:41.820 ************************************
08:41:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
08:41:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
08:41:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
08:41:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
08:41:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
08:41:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
08:41:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
08:41:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
08:41:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:04:42.079 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:04:42.079 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:04:43.017 Using 'verbs' RDMA provider
00:04:58.844 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:11.065 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:11.065 Creating mk/config.mk...done.
00:05:11.065 Creating mk/cc.flags.mk...done.
00:05:11.065 Type 'make' to build.
08:41:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
08:41:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
08:41:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable
08:41:33 -- common/autotest_common.sh@10 -- $ set +x
00:05:11.065 ************************************
00:05:11.065 START TEST make
00:05:11.065 ************************************
08:41:33 make -- common/autotest_common.sh@1125 -- $ make -j96
00:05:11.065 make[1]: Nothing to be done for 'all'.
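run_test, used above for both the ubsan smoke test and the main make, is SPDK's wrapper that brackets a command with START/END banners and timing; the banners and the real/user/sys lines in the log come from it. A simplified re-creation under that assumption (not SPDK's exact implementation):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # the log's real/user/sys lines come from 'time'
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test ubsan echo 'using ubsan'
    run_test make make -j96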
00:05:19.190 The Meson build system
00:05:19.190 Version: 1.5.0
00:05:19.190 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:05:19.190 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:05:19.190 Build type: native build
00:05:19.190 Program cat found: YES (/usr/bin/cat)
00:05:19.190 Project name: DPDK
00:05:19.190 Project version: 24.03.0
00:05:19.190 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:19.190 C linker for the host machine: cc ld.bfd 2.40-14
00:05:19.190 Host machine cpu family: x86_64
00:05:19.190 Host machine cpu: x86_64
00:05:19.190 Message: ## Building in Developer Mode ##
00:05:19.190 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:19.190 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:19.190 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:19.190 Program python3 found: YES (/usr/bin/python3)
00:05:19.190 Program cat found: YES (/usr/bin/cat)
00:05:19.190 Compiler for C supports arguments -march=native: YES
00:05:19.190 Checking for size of "void *" : 8
00:05:19.190 Checking for size of "void *" : 8 (cached)
00:05:19.190 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:19.190 Library m found: YES
00:05:19.190 Library numa found: YES
00:05:19.190 Has header "numaif.h" : YES
00:05:19.190 Library fdt found: NO
00:05:19.190 Library execinfo found: NO
00:05:19.190 Has header "execinfo.h" : YES
00:05:19.190 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:19.190 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:19.190 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:19.190 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:19.190 Run-time dependency openssl found: YES 3.1.1
00:05:19.190 Run-time dependency libpcap found: YES 1.10.4
00:05:19.190 Has header "pcap.h" with dependency libpcap: YES
00:05:19.190 Compiler for C supports arguments -Wcast-qual: YES
00:05:19.190 Compiler for C supports arguments -Wdeprecated: YES
00:05:19.190 Compiler for C supports arguments -Wformat: YES
00:05:19.190 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:19.190 Compiler for C supports arguments -Wformat-security: NO
00:05:19.190 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:19.190 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:19.190 Compiler for C supports arguments -Wnested-externs: YES
00:05:19.190 Compiler for C supports arguments -Wold-style-definition: YES
00:05:19.190 Compiler for C supports arguments -Wpointer-arith: YES
00:05:19.190 Compiler for C supports arguments -Wsign-compare: YES
00:05:19.190 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:19.190 Compiler for C supports arguments -Wundef: YES
00:05:19.190 Compiler for C supports arguments -Wwrite-strings: YES
00:05:19.190 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:19.190 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:19.190 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:19.190 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:19.190 Program objdump found: YES (/usr/bin/objdump)
00:05:19.190 Compiler for C supports arguments -mavx512f: YES
00:05:19.190 Checking if "AVX512 checking" compiles: YES
00:05:19.190 Fetching value of define "__SSE4_2__" : 1
00:05:19.190 Fetching value of define "__AES__" : 1
00:05:19.190 Fetching value of define "__AVX__" : 1
00:05:19.190 Fetching value of define "__AVX2__" : 1
00:05:19.190 Fetching value of define "__AVX512BW__" : 1
00:05:19.190 Fetching value of define "__AVX512CD__" : 1
00:05:19.190 Fetching value of define "__AVX512DQ__" : 1
00:05:19.190 Fetching value of define "__AVX512F__" : 1
00:05:19.190 Fetching value of define "__AVX512VL__" : 1
00:05:19.190 Fetching value of define "__PCLMUL__" : 1
00:05:19.190 Fetching value of define "__RDRND__" : 1
00:05:19.190 Fetching value of define "__RDSEED__" : 1
00:05:19.190 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:19.190 Fetching value of define "__znver1__" : (undefined)
00:05:19.190 Fetching value of define "__znver2__" : (undefined)
00:05:19.190 Fetching value of define "__znver3__" : (undefined)
00:05:19.190 Fetching value of define "__znver4__" : (undefined)
00:05:19.190 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:19.190 Message: lib/log: Defining dependency "log"
00:05:19.190 Message: lib/kvargs: Defining dependency "kvargs"
00:05:19.190 Message: lib/telemetry: Defining dependency "telemetry"
00:05:19.190 Checking for function "getentropy" : NO
00:05:19.190 Message: lib/eal: Defining dependency "eal"
00:05:19.190 Message: lib/ring: Defining dependency "ring"
00:05:19.190 Message: lib/rcu: Defining dependency "rcu"
00:05:19.190 Message: lib/mempool: Defining dependency "mempool"
00:05:19.190 Message: lib/mbuf: Defining dependency "mbuf"
00:05:19.190 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:19.190 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:19.190 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:19.190 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:19.190 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:19.190 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:05:19.190 Compiler for C supports arguments -mpclmul: YES
00:05:19.190 Compiler for C supports arguments -maes: YES
00:05:19.190 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:19.190 Compiler for C supports arguments -mavx512bw: YES
00:05:19.190 Compiler for C supports arguments -mavx512dq: YES
00:05:19.190 Compiler for C supports arguments -mavx512vl: YES
00:05:19.190 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:19.190 Compiler for C supports arguments -mavx2: YES
00:05:19.190 Compiler for C supports arguments -mavx: YES
00:05:19.190 Message: lib/net: Defining dependency "net"
00:05:19.190 Message: lib/meter: Defining dependency "meter"
00:05:19.190 Message: lib/ethdev: Defining dependency "ethdev"
00:05:19.190 Message: lib/pci: Defining dependency "pci"
00:05:19.190 Message: lib/cmdline: Defining dependency "cmdline"
00:05:19.190 Message: lib/hash: Defining dependency "hash"
00:05:19.190 Message: lib/timer: Defining dependency "timer"
00:05:19.190 Message: lib/compressdev: Defining dependency "compressdev"
00:05:19.190 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:19.190 Message: lib/dmadev: Defining dependency "dmadev"
00:05:19.190 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:19.190 Message: lib/power: Defining dependency "power"
00:05:19.190 Message: lib/reorder: Defining dependency "reorder"
00:05:19.190 Message: lib/security: Defining dependency "security"
00:05:19.190 Has header "linux/userfaultfd.h" : YES
00:05:19.190 Has header "linux/vduse.h" : YES
00:05:19.190 Message: lib/vhost: Defining dependency "vhost"
00:05:19.190 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:19.190 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:19.190 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:19.190 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:19.190 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:19.190 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:19.190 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:19.190 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:19.190 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:19.190 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:19.190 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:19.190 Configuring doxy-api-html.conf using configuration
00:05:19.190 Configuring doxy-api-man.conf using configuration
00:05:19.190 Program mandb found: YES (/usr/bin/mandb)
00:05:19.190 Program sphinx-build found: NO
00:05:19.190 Configuring rte_build_config.h using configuration
00:05:19.190 Message:
00:05:19.190 =================
00:05:19.190 Applications Enabled
00:05:19.190 =================
00:05:19.190
00:05:19.190 apps:
00:05:19.190
00:05:19.190
00:05:19.190 Message:
00:05:19.190 =================
00:05:19.190 Libraries Enabled
00:05:19.190 =================
00:05:19.190
00:05:19.190 libs:
00:05:19.190 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:19.190 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:19.190 cryptodev, dmadev, power, reorder, security, vhost,
00:05:19.190
00:05:19.191 Message:
00:05:19.191 ===============
00:05:19.191 Drivers Enabled
00:05:19.191 ===============
00:05:19.191
00:05:19.191 common:
00:05:19.191
00:05:19.191 bus:
00:05:19.191 pci, vdev,
00:05:19.191 mempool:
00:05:19.191 ring,
00:05:19.191 dma:
00:05:19.191
00:05:19.191 net:
00:05:19.191
00:05:19.191 crypto:
00:05:19.191
00:05:19.191 compress:
00:05:19.191
00:05:19.191 vdpa:
00:05:19.191
00:05:19.191
00:05:19.191 Message:
00:05:19.191 =================
00:05:19.191 Content Skipped
00:05:19.191 =================
00:05:19.191
00:05:19.191 apps:
00:05:19.191 dumpcap: explicitly disabled via build config
00:05:19.191 graph: explicitly disabled via build config
00:05:19.191 pdump: explicitly disabled via build config
00:05:19.191 proc-info: explicitly disabled via build config
00:05:19.191 test-acl: explicitly disabled via build config
00:05:19.191 test-bbdev: explicitly disabled via build config
00:05:19.191 test-cmdline: explicitly disabled via build config
00:05:19.191 test-compress-perf: explicitly disabled via build config
00:05:19.191 test-crypto-perf: explicitly disabled via build config
00:05:19.191 test-dma-perf: explicitly disabled via build config
00:05:19.191 test-eventdev: explicitly disabled via build config
00:05:19.191 test-fib: explicitly disabled via build config
00:05:19.191 test-flow-perf: explicitly disabled via build config
00:05:19.191 test-gpudev: explicitly disabled via build config
00:05:19.191 test-mldev: explicitly disabled via build config
00:05:19.191 test-pipeline: explicitly disabled via build config
00:05:19.191 test-pmd: explicitly disabled via build config
00:05:19.191 test-regex: explicitly disabled via build config
00:05:19.191 test-sad: explicitly disabled via build config
00:05:19.191 test-security-perf: explicitly disabled via build config
00:05:19.191
00:05:19.191 libs:
00:05:19.191 argparse: explicitly disabled via build config
00:05:19.191 metrics: explicitly disabled via build config
00:05:19.191 acl: explicitly disabled via build config
00:05:19.191 bbdev: explicitly disabled via build config
00:05:19.191 bitratestats: explicitly disabled via build config
00:05:19.191 bpf: explicitly disabled via build config
00:05:19.191 cfgfile: explicitly disabled via build config
00:05:19.191 distributor: explicitly disabled via build config
00:05:19.191 efd: explicitly disabled via build config
00:05:19.191 eventdev: explicitly disabled via build config
00:05:19.191 dispatcher: explicitly disabled via build config
00:05:19.191 gpudev: explicitly disabled via build config
00:05:19.191 gro: explicitly disabled via build config
00:05:19.191 gso: explicitly disabled via build config
00:05:19.191 ip_frag: explicitly disabled via build config
00:05:19.191 jobstats: explicitly disabled via build config
00:05:19.191 latencystats: explicitly disabled via build config
00:05:19.191 lpm: explicitly disabled via build config
00:05:19.191 member: explicitly disabled via build config
00:05:19.191 pcapng: explicitly disabled via build config
00:05:19.191 rawdev: explicitly disabled via build config
00:05:19.191 regexdev: explicitly disabled via build config
00:05:19.191 mldev: explicitly disabled via build config
00:05:19.191 rib: explicitly disabled via build config
00:05:19.191 sched: explicitly disabled via build config
00:05:19.191 stack: explicitly disabled via build config
00:05:19.191 ipsec: explicitly disabled via build config
00:05:19.191 pdcp: explicitly disabled via build config
00:05:19.191 fib: explicitly disabled via build config
00:05:19.191 port: explicitly disabled via build config
00:05:19.191 pdump: explicitly disabled via build config
00:05:19.191 table: explicitly disabled via build config
00:05:19.191 pipeline: explicitly disabled via build config
00:05:19.191 graph: explicitly disabled via build config
00:05:19.191 node: explicitly disabled via build config
00:05:19.191
00:05:19.191 drivers:
00:05:19.191 common/cpt: not in enabled drivers build config
00:05:19.191 common/dpaax: not in enabled drivers build config
00:05:19.191 common/iavf: not in enabled drivers build config
00:05:19.191 common/idpf: not in enabled drivers build config
00:05:19.191 common/ionic: not in enabled drivers build config
00:05:19.191 common/mvep: not in enabled drivers build config
00:05:19.191 common/octeontx: not in enabled drivers build config
00:05:19.191 bus/auxiliary: not in enabled drivers build config
00:05:19.191 bus/cdx: not in enabled drivers build config
00:05:19.191 bus/dpaa: not in enabled drivers build config
00:05:19.191 bus/fslmc: not in enabled drivers build config
00:05:19.191 bus/ifpga: not in enabled drivers build config
00:05:19.191 bus/platform: not in enabled drivers build config
00:05:19.191 bus/uacce: not in enabled drivers build config
00:05:19.191 bus/vmbus: not in enabled drivers build config
00:05:19.191 common/cnxk: not in enabled drivers build config
00:05:19.191 common/mlx5: not in enabled drivers build config
00:05:19.191 common/nfp: not in enabled drivers build config
00:05:19.191 common/nitrox: not in enabled drivers build config
00:05:19.191 common/qat: not in enabled drivers build config
00:05:19.191 common/sfc_efx: not in enabled drivers build config
00:05:19.191 mempool/bucket: not in enabled drivers build config
00:05:19.191 mempool/cnxk: not in enabled drivers build config
00:05:19.191 mempool/dpaa: not in enabled drivers build config
00:05:19.191 mempool/dpaa2: not in enabled drivers build config
00:05:19.191 mempool/octeontx: not in enabled drivers build config
00:05:19.191 mempool/stack: not in enabled drivers build config
00:05:19.191 dma/cnxk: not in enabled drivers build config
00:05:19.191 dma/dpaa: not in enabled drivers build config
00:05:19.191 dma/dpaa2: not in enabled drivers build config
00:05:19.191 dma/hisilicon: not in enabled drivers build config
00:05:19.191 dma/idxd: not in enabled drivers build config
00:05:19.191 dma/ioat: not in enabled drivers build config
00:05:19.191 dma/skeleton: not in enabled drivers build config
00:05:19.191 net/af_packet: not in enabled drivers build config
00:05:19.191 net/af_xdp: not in enabled drivers build config
00:05:19.191 net/ark: not in enabled drivers build config
00:05:19.191 net/atlantic: not in enabled drivers build config
00:05:19.191 net/avp: not in enabled drivers build config
00:05:19.191 net/axgbe: not in enabled drivers build config
00:05:19.191 net/bnx2x: not in enabled drivers build config
00:05:19.191 net/bnxt: not in enabled drivers build config
00:05:19.191 net/bonding: not in enabled drivers build config
00:05:19.191 net/cnxk: not in enabled drivers build config
00:05:19.191 net/cpfl: not in enabled drivers build config
00:05:19.191 net/cxgbe: not in enabled drivers build config
00:05:19.191 net/dpaa: not in enabled drivers build config
00:05:19.191 net/dpaa2: not in enabled drivers build config
00:05:19.191 net/e1000: not in enabled drivers build config
00:05:19.191 net/ena: not in enabled drivers build config
00:05:19.191 net/enetc: not in enabled drivers build config
00:05:19.191 net/enetfec: not in enabled drivers build config
00:05:19.191 net/enic: not in enabled drivers build config
00:05:19.191 net/failsafe: not in enabled drivers build config
00:05:19.191 net/fm10k: not in enabled drivers build config
00:05:19.191 net/gve: not in enabled drivers build config
00:05:19.191 net/hinic: not in enabled drivers build config
00:05:19.191 net/hns3: not in enabled drivers build config
00:05:19.191 net/i40e: not in enabled drivers build config
00:05:19.191 net/iavf: not in enabled drivers build config
00:05:19.191 net/ice: not in enabled drivers build config
00:05:19.191 net/idpf: not in enabled drivers build config
00:05:19.191 net/igc: not in enabled drivers build config
00:05:19.191 net/ionic: not in enabled drivers build config
00:05:19.191 net/ipn3ke: not in enabled drivers build config
00:05:19.191 net/ixgbe: not in enabled drivers build config
00:05:19.191 net/mana: not in enabled drivers build config
00:05:19.191 net/memif: not in enabled drivers build config
00:05:19.191 net/mlx4: not in enabled drivers build config
00:05:19.191 net/mlx5: not in enabled drivers build config
00:05:19.191 net/mvneta: not in enabled drivers build config
00:05:19.191 net/mvpp2: not in enabled drivers build config
00:05:19.191 net/netvsc: not in enabled drivers build config
00:05:19.191 net/nfb: not in enabled drivers build config
00:05:19.191 net/nfp: not in enabled drivers build config
00:05:19.191 net/ngbe: not in enabled drivers build config
00:05:19.191 net/null: not in enabled drivers build config
00:05:19.191 net/octeontx: not in enabled drivers build config
00:05:19.191 net/octeon_ep: not in enabled drivers build config
00:05:19.191 net/pcap: not in enabled drivers build config
00:05:19.191 net/pfe: not in enabled drivers build config
00:05:19.191 net/qede: not in enabled drivers build config
00:05:19.191 net/ring: not in enabled drivers build config
00:05:19.191 net/sfc: not in enabled drivers build config
00:05:19.191 net/softnic: not in enabled drivers build config
00:05:19.191 net/tap: not in enabled drivers build config
00:05:19.191 net/thunderx: not in enabled drivers build config
00:05:19.191 net/txgbe: not in enabled drivers build config
00:05:19.191 net/vdev_netvsc: not in enabled drivers build config
00:05:19.191 net/vhost: not in enabled drivers build config
00:05:19.191 net/virtio: not in enabled drivers build config
00:05:19.191 net/vmxnet3: not in enabled drivers build config
00:05:19.191 raw/*: missing internal dependency, "rawdev"
00:05:19.191 crypto/armv8: not in enabled drivers build config
00:05:19.191 crypto/bcmfs: not in enabled drivers build config
00:05:19.191 crypto/caam_jr: not in enabled drivers build config
00:05:19.191 crypto/ccp: not in enabled drivers build config
00:05:19.191 crypto/cnxk: not in enabled drivers build config
00:05:19.191 crypto/dpaa_sec: not in enabled drivers build config
00:05:19.191 crypto/dpaa2_sec: not in enabled drivers build config
00:05:19.191 crypto/ipsec_mb: not in enabled drivers build config
00:05:19.191 crypto/mlx5: not in enabled drivers build config
00:05:19.191 crypto/mvsam: not in enabled drivers build config
00:05:19.191 crypto/nitrox: not in enabled drivers build config
00:05:19.191 crypto/null: not in enabled drivers build config
00:05:19.191 crypto/octeontx: not in enabled drivers build config
00:05:19.191 crypto/openssl: not in enabled drivers build config
00:05:19.191 crypto/scheduler: not in enabled drivers build config
00:05:19.191 crypto/uadk: not in enabled drivers build config
00:05:19.191 crypto/virtio: not in enabled drivers build config
00:05:19.191 compress/isal: not in enabled drivers build config
00:05:19.191 compress/mlx5: not in enabled drivers build config
00:05:19.191 compress/nitrox: not in enabled drivers build config
00:05:19.192 compress/octeontx: not in enabled drivers build config
00:05:19.192 compress/zlib: not in enabled drivers build config
00:05:19.192 regex/*: missing internal dependency, "regexdev"
00:05:19.192 ml/*: missing internal dependency, "mldev"
00:05:19.192 vdpa/ifc: not in enabled drivers build config
00:05:19.192 vdpa/mlx5: not in enabled drivers build config
00:05:19.192 vdpa/nfp: not in enabled drivers build config
00:05:19.192 vdpa/sfc: not in enabled drivers build config
00:05:19.192 event/*: missing internal dependency, "eventdev"
00:05:19.192 baseband/*: missing internal dependency, "bbdev"
00:05:19.192 gpu/*: missing internal dependency, "gpudev"
00:05:19.192
00:05:19.192
00:05:19.192 Build targets in project: 85
00:05:19.192
00:05:19.192 DPDK 24.03.0
00:05:19.192
00:05:19.192 User defined options
00:05:19.192 buildtype : debug
00:05:19.192 default_library : shared
00:05:19.192 libdir : lib
00:05:19.192 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:05:19.192 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:19.192 c_link_args :
00:05:19.192 cpu_instruction_set: native
00:05:19.192 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:05:19.192 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:05:19.192 enable_docs : false
00:05:19.192 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:05:19.192 enable_kmods : false
00:05:19.192 max_lcores : 128
00:05:19.192 tests : false
00:05:19.192
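The "User defined options" summary above pins down how this DPDK tree was configured. A rough reconstruction of the equivalent meson setup invocation, assuming standard meson/DPDK option spellings (the values are taken from the summary; the exact command SPDK's build scripts issue is not shown in the log):

    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false \
        -Dmax_lcores=128 -Dtests=false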
00:05:19.192 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:19.460 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:05:19.460 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:19.460 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:19.460 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:19.460 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:19.460 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:19.460 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:19.460 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:19.460 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:19.728 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:19.728 [10/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:19.728 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:19.728 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:19.728 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:19.728 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:19.728 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:19.728 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:19.728 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:19.728 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:19.728 [19/268] Linking static target lib/librte_kvargs.a
00:05:19.728 [20/268] Linking static target lib/librte_log.a
00:05:19.728 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:19.728 [22/268] Linking static target lib/librte_pci.a
00:05:19.728 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:19.728 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:19.991 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:19.991 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:19.991 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:19.991 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:19.991 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:19.991 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:19.991 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:19.991 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:19.991 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:19.991 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:19.991 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:19.991 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:19.991 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:19.991 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:19.991 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:19.991 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:19.991 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:19.991 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:19.991 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:19.991 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:19.991 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:19.991 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:19.991 [47/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:19.991 [48/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:19.991 [49/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:19.991 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:19.991 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:19.991 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:19.991 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:19.991 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:19.991 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:19.991 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:19.991 [57/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:20.253 [58/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:20.253 [59/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:20.253 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:20.253 [61/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:20.253 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:20.253 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:20.253 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:20.253 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:20.253 [66/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:20.253 [67/268] Linking static target lib/librte_ring.a
00:05:20.253 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:20.253 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:20.253 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:20.253 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:20.253 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:20.253 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:20.253 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:20.253 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:20.253 [76/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:20.253 [77/268] Linking static target lib/librte_meter.a
00:05:20.253 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:20.253 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:20.253 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:20.253 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:20.253 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:20.253 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:20.253 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:20.253 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:20.253 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:20.253 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:20.253 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:20.253 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:20.253 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:20.253 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:20.253 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:20.253 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:20.253 [94/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:20.253 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:20.253 [96/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.253 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:20.253 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:20.253 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:20.253 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:20.253 [101/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.253 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:20.253 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:20.253 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:20.253 [105/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:20.253 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:20.253 [107/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:20.253 [108/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:20.253 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:20.253 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:20.253 [111/268] Linking static target lib/librte_telemetry.a
00:05:20.253 [112/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:20.253 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:20.253 [114/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:20.253 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:20.253 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:20.253 [117/268] Linking static target lib/librte_net.a
00:05:20.253 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:20.253 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:20.253 [120/268] Linking static target lib/librte_mempool.a
00:05:20.253 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:20.253 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:20.253 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:20.253 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:20.253 [125/268] Linking static target lib/librte_rcu.a
00:05:20.253 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:20.253 [127/268] Linking static target lib/librte_eal.a
00:05:20.253 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:20.253 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:20.253 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:20.512 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:20.512 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:20.512 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:20.512 [134/268] Linking static target lib/librte_cmdline.a
00:05:20.512 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.512 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.512 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:20.512 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:20.512 [139/268] Linking static target lib/librte_mbuf.a
00:05:20.512 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.512 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:20.512 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:20.512 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:20.512 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:20.512 [145/268] Linking target lib/librte_log.so.24.1
00:05:20.512 [146/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.512 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:20.512 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:20.512 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:20.512 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:20.512 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:20.512 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:20.512 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:20.512 [154/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:20.512 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:20.512 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:20.512 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:20.512 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:20.512 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:20.512 [160/268] Linking static target lib/librte_timer.a 00:05:20.512 [161/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:20.771 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:20.771 [163/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.771 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:20.771 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:20.771 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:20.771 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:20.771 [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:20.771 [169/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:20.771 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:20.771 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:20.771 [172/268] Linking static target lib/librte_dmadev.a 00:05:20.771 [173/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.771 [174/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:20.771 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:20.771 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:20.771 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:20.771 [178/268] Linking static target lib/librte_power.a 00:05:20.771 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:20.771 [180/268] Linking static target lib/librte_compressdev.a 00:05:20.771 [181/268] Linking target lib/librte_kvargs.so.24.1 00:05:20.771 [182/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:20.771 [183/268] Linking target lib/librte_telemetry.so.24.1 00:05:20.771 [184/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:20.771 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:20.771 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:20.771 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:20.771 [188/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:20.771 [189/268] Linking static target lib/librte_hash.a 00:05:20.771 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:20.771 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:20.771 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:20.771 [193/268] Linking static target lib/librte_reorder.a 00:05:20.771 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:20.771 [195/268] Linking static target lib/librte_security.a 00:05:20.771 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:20.771 [197/268] Generating symbol 
file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:21.030 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:21.030 [199/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:21.030 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:21.030 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:21.030 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:21.030 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:21.030 [204/268] Linking static target drivers/librte_bus_vdev.a 00:05:21.030 [205/268] Linking static target drivers/librte_mempool_ring.a 00:05:21.030 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:21.030 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:21.030 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:21.030 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:21.030 [210/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:21.030 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.030 [212/268] Linking static target drivers/librte_bus_pci.a 00:05:21.030 [213/268] Linking static target lib/librte_cryptodev.a 00:05:21.030 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.290 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.290 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:21.290 [217/268] Linking static target lib/librte_ethdev.a 00:05:21.290 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.290 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.290 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.549 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.549 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.549 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.549 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:21.549 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.808 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.808 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.746 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:22.746 [229/268] Linking static target lib/librte_vhost.a 00:05:23.005 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.385 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.655 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 
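The interleaved "Generating lib/X.sym_chk" and "Generating symbol file" steps are DPDK's export checks: the dynamic symbols of each freshly linked shared object are dumped and compared against the set the library is supposed to export. An illustrative way to express that kind of check in shell (not the exact custom command meson wraps here):

    # Dump the symbols librte_log.so.24.1 actually exports, then compare
    # against an expected list; any output from comm means a mismatch.
    nm -D --defined-only lib/librte_log.so.24.1 | awk '{print $NF}' | sort > actual.syms
    comm -3 <(sort expected.syms) actual.syms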
00:05:30.223 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.223 [234/268] Linking target lib/librte_eal.so.24.1 00:05:30.223 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:30.223 [236/268] Linking target lib/librte_ring.so.24.1 00:05:30.223 [237/268] Linking target lib/librte_meter.so.24.1 00:05:30.223 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:30.223 [239/268] Linking target lib/librte_pci.so.24.1 00:05:30.223 [240/268] Linking target lib/librte_timer.so.24.1 00:05:30.223 [241/268] Linking target lib/librte_dmadev.so.24.1 00:05:30.482 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:30.482 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:30.482 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:30.482 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:30.482 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:30.482 [247/268] Linking target lib/librte_rcu.so.24.1 00:05:30.482 [248/268] Linking target lib/librte_mempool.so.24.1 00:05:30.483 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:30.742 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:30.742 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:30.742 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:30.742 [253/268] Linking target lib/librte_mbuf.so.24.1 00:05:30.742 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:31.001 [255/268] Linking target lib/librte_reorder.so.24.1 00:05:31.001 [256/268] Linking target lib/librte_compressdev.so.24.1 00:05:31.001 [257/268] Linking target lib/librte_net.so.24.1 00:05:31.001 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:05:31.001 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:31.001 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:31.001 [261/268] Linking target lib/librte_hash.so.24.1 00:05:31.001 [262/268] Linking target lib/librte_security.so.24.1 00:05:31.001 [263/268] Linking target lib/librte_cmdline.so.24.1 00:05:31.001 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:31.260 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:31.260 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:31.260 [267/268] Linking target lib/librte_power.so.24.1 00:05:31.260 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:31.260 INFO: autodetecting backend as ninja 00:05:31.260 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:05:43.471 CC lib/ut_mock/mock.o 00:05:43.471 CC lib/ut/ut.o 00:05:43.471 CC lib/log/log.o 00:05:43.471 CC lib/log/log_flags.o 00:05:43.471 CC lib/log/log_deprecated.o 00:05:43.471 LIB libspdk_ut.a 00:05:43.471 LIB libspdk_log.a 00:05:43.471 LIB libspdk_ut_mock.a 00:05:43.471 SO libspdk_ut.so.2.0 00:05:43.471 SO libspdk_ut_mock.so.6.0 00:05:43.471 SO libspdk_log.so.7.1 00:05:43.471 SYMLINK libspdk_ut.so 00:05:43.471 SYMLINK libspdk_ut_mock.so 00:05:43.471 SYMLINK libspdk_log.so 00:05:43.471 CC 
lib/dma/dma.o 00:05:43.471 CXX lib/trace_parser/trace.o 00:05:43.471 CC lib/ioat/ioat.o 00:05:43.471 CC lib/util/base64.o 00:05:43.471 CC lib/util/bit_array.o 00:05:43.471 CC lib/util/cpuset.o 00:05:43.471 CC lib/util/crc16.o 00:05:43.471 CC lib/util/crc32.o 00:05:43.471 CC lib/util/crc32c.o 00:05:43.471 CC lib/util/crc32_ieee.o 00:05:43.471 CC lib/util/crc64.o 00:05:43.471 CC lib/util/dif.o 00:05:43.471 CC lib/util/fd.o 00:05:43.471 CC lib/util/fd_group.o 00:05:43.471 CC lib/util/file.o 00:05:43.471 CC lib/util/hexlify.o 00:05:43.471 CC lib/util/iov.o 00:05:43.471 CC lib/util/math.o 00:05:43.471 CC lib/util/net.o 00:05:43.471 CC lib/util/pipe.o 00:05:43.471 CC lib/util/strerror_tls.o 00:05:43.471 CC lib/util/string.o 00:05:43.471 CC lib/util/uuid.o 00:05:43.471 CC lib/util/xor.o 00:05:43.471 CC lib/util/zipf.o 00:05:43.471 CC lib/util/md5.o 00:05:43.471 CC lib/vfio_user/host/vfio_user_pci.o 00:05:43.471 CC lib/vfio_user/host/vfio_user.o 00:05:43.471 LIB libspdk_dma.a 00:05:43.471 SO libspdk_dma.so.5.0 00:05:43.471 SYMLINK libspdk_dma.so 00:05:43.471 LIB libspdk_ioat.a 00:05:43.471 SO libspdk_ioat.so.7.0 00:05:43.471 SYMLINK libspdk_ioat.so 00:05:43.471 LIB libspdk_vfio_user.a 00:05:43.471 SO libspdk_vfio_user.so.5.0 00:05:43.471 SYMLINK libspdk_vfio_user.so 00:05:43.471 LIB libspdk_util.a 00:05:43.471 SO libspdk_util.so.10.0 00:05:43.471 SYMLINK libspdk_util.so 00:05:43.471 CC lib/rdma_provider/common.o 00:05:43.471 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:43.471 CC lib/conf/conf.o 00:05:43.471 CC lib/rdma_utils/rdma_utils.o 00:05:43.471 CC lib/json/json_parse.o 00:05:43.471 CC lib/json/json_util.o 00:05:43.471 CC lib/json/json_write.o 00:05:43.471 CC lib/vmd/vmd.o 00:05:43.471 CC lib/vmd/led.o 00:05:43.471 CC lib/idxd/idxd.o 00:05:43.471 CC lib/env_dpdk/env.o 00:05:43.471 CC lib/env_dpdk/memory.o 00:05:43.471 CC lib/idxd/idxd_user.o 00:05:43.471 CC lib/idxd/idxd_kernel.o 00:05:43.471 CC lib/env_dpdk/pci.o 00:05:43.471 CC lib/env_dpdk/init.o 00:05:43.471 CC lib/env_dpdk/threads.o 00:05:43.471 CC lib/env_dpdk/pci_ioat.o 00:05:43.471 CC lib/env_dpdk/pci_virtio.o 00:05:43.471 CC lib/env_dpdk/pci_vmd.o 00:05:43.471 CC lib/env_dpdk/sigbus_handler.o 00:05:43.471 CC lib/env_dpdk/pci_idxd.o 00:05:43.471 CC lib/env_dpdk/pci_event.o 00:05:43.471 CC lib/env_dpdk/pci_dpdk.o 00:05:43.471 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:43.471 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:43.471 LIB libspdk_rdma_provider.a 00:05:43.471 SO libspdk_rdma_provider.so.6.0 00:05:43.471 LIB libspdk_rdma_utils.a 00:05:43.471 LIB libspdk_conf.a 00:05:43.471 SO libspdk_conf.so.6.0 00:05:43.471 LIB libspdk_json.a 00:05:43.471 SO libspdk_rdma_utils.so.1.0 00:05:43.471 SYMLINK libspdk_rdma_provider.so 00:05:43.471 SO libspdk_json.so.6.0 00:05:43.471 SYMLINK libspdk_rdma_utils.so 00:05:43.471 SYMLINK libspdk_conf.so 00:05:43.471 SYMLINK libspdk_json.so 00:05:43.471 LIB libspdk_idxd.a 00:05:43.731 SO libspdk_idxd.so.12.1 00:05:43.731 LIB libspdk_vmd.a 00:05:43.731 SO libspdk_vmd.so.6.0 00:05:43.731 SYMLINK libspdk_idxd.so 00:05:43.731 SYMLINK libspdk_vmd.so 00:05:43.731 LIB libspdk_trace_parser.a 00:05:43.731 CC lib/jsonrpc/jsonrpc_server.o 00:05:43.731 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:43.731 CC lib/jsonrpc/jsonrpc_client.o 00:05:43.731 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:43.731 SO libspdk_trace_parser.so.6.0 00:05:43.731 SYMLINK libspdk_trace_parser.so 00:05:43.990 LIB libspdk_jsonrpc.a 00:05:43.990 SO libspdk_jsonrpc.so.6.0 00:05:43.990 SYMLINK libspdk_jsonrpc.so 00:05:44.249 LIB libspdk_env_dpdk.a 00:05:44.249 
SO libspdk_env_dpdk.so.15.1 00:05:44.249 SYMLINK libspdk_env_dpdk.so 00:05:44.249 CC lib/rpc/rpc.o 00:05:44.508 LIB libspdk_rpc.a 00:05:44.508 SO libspdk_rpc.so.6.0 00:05:44.768 SYMLINK libspdk_rpc.so 00:05:45.028 CC lib/notify/notify.o 00:05:45.028 CC lib/notify/notify_rpc.o 00:05:45.028 CC lib/keyring/keyring.o 00:05:45.028 CC lib/keyring/keyring_rpc.o 00:05:45.028 CC lib/trace/trace.o 00:05:45.028 CC lib/trace/trace_flags.o 00:05:45.028 CC lib/trace/trace_rpc.o 00:05:45.028 LIB libspdk_notify.a 00:05:45.028 SO libspdk_notify.so.6.0 00:05:45.288 LIB libspdk_trace.a 00:05:45.288 LIB libspdk_keyring.a 00:05:45.288 SO libspdk_trace.so.11.0 00:05:45.288 SYMLINK libspdk_notify.so 00:05:45.288 SO libspdk_keyring.so.2.0 00:05:45.288 SYMLINK libspdk_trace.so 00:05:45.288 SYMLINK libspdk_keyring.so 00:05:45.547 CC lib/sock/sock.o 00:05:45.547 CC lib/sock/sock_rpc.o 00:05:45.547 CC lib/thread/thread.o 00:05:45.547 CC lib/thread/iobuf.o 00:05:45.807 LIB libspdk_sock.a 00:05:45.807 SO libspdk_sock.so.10.0 00:05:46.066 SYMLINK libspdk_sock.so 00:05:46.325 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:46.325 CC lib/nvme/nvme_ctrlr.o 00:05:46.325 CC lib/nvme/nvme_fabric.o 00:05:46.325 CC lib/nvme/nvme_ns_cmd.o 00:05:46.326 CC lib/nvme/nvme_ns.o 00:05:46.326 CC lib/nvme/nvme_pcie_common.o 00:05:46.326 CC lib/nvme/nvme_pcie.o 00:05:46.326 CC lib/nvme/nvme_qpair.o 00:05:46.326 CC lib/nvme/nvme.o 00:05:46.326 CC lib/nvme/nvme_quirks.o 00:05:46.326 CC lib/nvme/nvme_transport.o 00:05:46.326 CC lib/nvme/nvme_discovery.o 00:05:46.326 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:46.326 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:46.326 CC lib/nvme/nvme_tcp.o 00:05:46.326 CC lib/nvme/nvme_opal.o 00:05:46.326 CC lib/nvme/nvme_io_msg.o 00:05:46.326 CC lib/nvme/nvme_poll_group.o 00:05:46.326 CC lib/nvme/nvme_zns.o 00:05:46.326 CC lib/nvme/nvme_stubs.o 00:05:46.326 CC lib/nvme/nvme_auth.o 00:05:46.326 CC lib/nvme/nvme_cuse.o 00:05:46.326 CC lib/nvme/nvme_rdma.o 00:05:46.585 LIB libspdk_thread.a 00:05:46.844 SO libspdk_thread.so.11.0 00:05:46.844 SYMLINK libspdk_thread.so 00:05:47.103 CC lib/accel/accel.o 00:05:47.103 CC lib/accel/accel_rpc.o 00:05:47.103 CC lib/accel/accel_sw.o 00:05:47.103 CC lib/blob/blobstore.o 00:05:47.103 CC lib/blob/request.o 00:05:47.103 CC lib/blob/zeroes.o 00:05:47.103 CC lib/blob/blob_bs_dev.o 00:05:47.103 CC lib/fsdev/fsdev.o 00:05:47.103 CC lib/init/json_config.o 00:05:47.103 CC lib/fsdev/fsdev_io.o 00:05:47.103 CC lib/fsdev/fsdev_rpc.o 00:05:47.103 CC lib/init/subsystem.o 00:05:47.103 CC lib/init/subsystem_rpc.o 00:05:47.103 CC lib/init/rpc.o 00:05:47.103 CC lib/virtio/virtio.o 00:05:47.103 CC lib/virtio/virtio_vhost_user.o 00:05:47.103 CC lib/virtio/virtio_vfio_user.o 00:05:47.103 CC lib/virtio/virtio_pci.o 00:05:47.363 LIB libspdk_init.a 00:05:47.363 SO libspdk_init.so.6.0 00:05:47.363 LIB libspdk_virtio.a 00:05:47.363 SYMLINK libspdk_init.so 00:05:47.363 SO libspdk_virtio.so.7.0 00:05:47.622 SYMLINK libspdk_virtio.so 00:05:47.622 LIB libspdk_fsdev.a 00:05:47.622 SO libspdk_fsdev.so.2.0 00:05:47.622 CC lib/event/app.o 00:05:47.622 CC lib/event/reactor.o 00:05:47.622 CC lib/event/log_rpc.o 00:05:47.622 CC lib/event/app_rpc.o 00:05:47.622 CC lib/event/scheduler_static.o 00:05:47.622 SYMLINK libspdk_fsdev.so 00:05:47.882 LIB libspdk_accel.a 00:05:47.882 SO libspdk_accel.so.16.1 00:05:47.882 LIB libspdk_nvme.a 00:05:47.882 SYMLINK libspdk_accel.so 00:05:48.142 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:48.142 SO libspdk_nvme.so.14.1 00:05:48.142 LIB libspdk_event.a 00:05:48.142 SO libspdk_event.so.14.0 
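Each SPDK library appears in this log as the same trio: LIB (the static archive), SO (a versioned shared object such as libspdk_log.so.7.1), and SYMLINK (the unversioned name pointing at it). A minimal sketch of that layout, with illustrative commands rather than SPDK's actual link lines (object names taken from the CC lines above):

    # Link a versioned shared object with an soname, then create the
    # unversioned development symlink the SYMLINK lines refer to.
    cc -shared -Wl,-soname,libspdk_log.so.7 -o libspdk_log.so.7.1 \
        log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so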
00:05:48.142 SYMLINK libspdk_event.so 00:05:48.142 SYMLINK libspdk_nvme.so 00:05:48.402 CC lib/bdev/bdev.o 00:05:48.402 CC lib/bdev/bdev_rpc.o 00:05:48.402 CC lib/bdev/bdev_zone.o 00:05:48.402 CC lib/bdev/part.o 00:05:48.402 CC lib/bdev/scsi_nvme.o 00:05:48.402 LIB libspdk_fuse_dispatcher.a 00:05:48.402 SO libspdk_fuse_dispatcher.so.1.0 00:05:48.662 SYMLINK libspdk_fuse_dispatcher.so 00:05:49.232 LIB libspdk_blob.a 00:05:49.232 SO libspdk_blob.so.11.0 00:05:49.232 SYMLINK libspdk_blob.so 00:05:49.804 CC lib/blobfs/blobfs.o 00:05:49.804 CC lib/blobfs/tree.o 00:05:49.804 CC lib/lvol/lvol.o 00:05:50.064 LIB libspdk_bdev.a 00:05:50.324 SO libspdk_bdev.so.17.0 00:05:50.324 LIB libspdk_blobfs.a 00:05:50.324 SO libspdk_blobfs.so.10.0 00:05:50.324 SYMLINK libspdk_bdev.so 00:05:50.324 LIB libspdk_lvol.a 00:05:50.324 SYMLINK libspdk_blobfs.so 00:05:50.324 SO libspdk_lvol.so.10.0 00:05:50.324 SYMLINK libspdk_lvol.so 00:05:50.584 CC lib/nvmf/ctrlr.o 00:05:50.584 CC lib/nvmf/ctrlr_discovery.o 00:05:50.584 CC lib/nvmf/ctrlr_bdev.o 00:05:50.584 CC lib/nbd/nbd_rpc.o 00:05:50.584 CC lib/nbd/nbd.o 00:05:50.584 CC lib/nvmf/subsystem.o 00:05:50.584 CC lib/nvmf/nvmf.o 00:05:50.584 CC lib/nvmf/nvmf_rpc.o 00:05:50.584 CC lib/nvmf/transport.o 00:05:50.584 CC lib/ublk/ublk.o 00:05:50.584 CC lib/nvmf/tcp.o 00:05:50.584 CC lib/ublk/ublk_rpc.o 00:05:50.584 CC lib/nvmf/stubs.o 00:05:50.584 CC lib/nvmf/mdns_server.o 00:05:50.584 CC lib/scsi/dev.o 00:05:50.584 CC lib/nvmf/rdma.o 00:05:50.584 CC lib/scsi/lun.o 00:05:50.584 CC lib/nvmf/auth.o 00:05:50.584 CC lib/scsi/port.o 00:05:50.584 CC lib/scsi/scsi.o 00:05:50.584 CC lib/ftl/ftl_core.o 00:05:50.584 CC lib/scsi/scsi_bdev.o 00:05:50.584 CC lib/ftl/ftl_init.o 00:05:50.584 CC lib/ftl/ftl_layout.o 00:05:50.584 CC lib/scsi/scsi_pr.o 00:05:50.584 CC lib/scsi/scsi_rpc.o 00:05:50.584 CC lib/ftl/ftl_debug.o 00:05:50.584 CC lib/scsi/task.o 00:05:50.584 CC lib/ftl/ftl_io.o 00:05:50.584 CC lib/ftl/ftl_sb.o 00:05:50.584 CC lib/ftl/ftl_l2p.o 00:05:50.584 CC lib/ftl/ftl_l2p_flat.o 00:05:50.584 CC lib/ftl/ftl_nv_cache.o 00:05:50.584 CC lib/ftl/ftl_band.o 00:05:50.584 CC lib/ftl/ftl_writer.o 00:05:50.584 CC lib/ftl/ftl_band_ops.o 00:05:50.584 CC lib/ftl/ftl_rq.o 00:05:50.584 CC lib/ftl/ftl_reloc.o 00:05:50.584 CC lib/ftl/ftl_l2p_cache.o 00:05:50.584 CC lib/ftl/ftl_p2l.o 00:05:50.584 CC lib/ftl/ftl_p2l_log.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:50.584 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:50.584 CC lib/ftl/utils/ftl_conf.o 00:05:50.584 CC lib/ftl/utils/ftl_md.o 00:05:50.584 CC lib/ftl/utils/ftl_mempool.o 00:05:50.584 CC lib/ftl/utils/ftl_property.o 00:05:50.584 CC lib/ftl/utils/ftl_bitmap.o 00:05:50.584 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:50.584 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:50.584 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:50.584 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:50.584 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:50.584 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:50.584 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:50.584 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:05:50.584 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:50.584 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:50.584 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:50.584 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:50.584 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:50.584 CC lib/ftl/base/ftl_base_dev.o 00:05:50.584 CC lib/ftl/base/ftl_base_bdev.o 00:05:50.584 CC lib/ftl/ftl_trace.o 00:05:51.153 LIB libspdk_nbd.a 00:05:51.153 SO libspdk_nbd.so.7.0 00:05:51.153 SYMLINK libspdk_nbd.so 00:05:51.153 LIB libspdk_scsi.a 00:05:51.412 SO libspdk_scsi.so.9.0 00:05:51.412 LIB libspdk_ublk.a 00:05:51.412 SO libspdk_ublk.so.3.0 00:05:51.412 SYMLINK libspdk_scsi.so 00:05:51.412 SYMLINK libspdk_ublk.so 00:05:51.671 LIB libspdk_ftl.a 00:05:51.671 CC lib/iscsi/conn.o 00:05:51.671 CC lib/iscsi/init_grp.o 00:05:51.671 CC lib/iscsi/iscsi.o 00:05:51.671 CC lib/iscsi/param.o 00:05:51.671 CC lib/iscsi/portal_grp.o 00:05:51.671 CC lib/iscsi/tgt_node.o 00:05:51.671 CC lib/iscsi/iscsi_subsystem.o 00:05:51.671 CC lib/vhost/vhost.o 00:05:51.671 CC lib/iscsi/iscsi_rpc.o 00:05:51.671 CC lib/iscsi/task.o 00:05:51.671 CC lib/vhost/vhost_rpc.o 00:05:51.671 CC lib/vhost/vhost_scsi.o 00:05:51.671 CC lib/vhost/vhost_blk.o 00:05:51.671 CC lib/vhost/rte_vhost_user.o 00:05:51.671 SO libspdk_ftl.so.9.0 00:05:51.932 SYMLINK libspdk_ftl.so 00:05:52.533 LIB libspdk_nvmf.a 00:05:52.533 SO libspdk_nvmf.so.20.0 00:05:52.533 LIB libspdk_vhost.a 00:05:52.533 SO libspdk_vhost.so.8.0 00:05:52.533 SYMLINK libspdk_nvmf.so 00:05:52.533 SYMLINK libspdk_vhost.so 00:05:52.794 LIB libspdk_iscsi.a 00:05:52.794 SO libspdk_iscsi.so.8.0 00:05:52.794 SYMLINK libspdk_iscsi.so 00:05:53.361 CC module/env_dpdk/env_dpdk_rpc.o 00:05:53.620 LIB libspdk_env_dpdk_rpc.a 00:05:53.620 CC module/keyring/file/keyring.o 00:05:53.620 CC module/keyring/linux/keyring_rpc.o 00:05:53.620 CC module/keyring/linux/keyring.o 00:05:53.620 CC module/keyring/file/keyring_rpc.o 00:05:53.620 CC module/accel/error/accel_error.o 00:05:53.620 CC module/accel/error/accel_error_rpc.o 00:05:53.620 CC module/sock/posix/posix.o 00:05:53.620 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:53.620 CC module/accel/ioat/accel_ioat.o 00:05:53.620 CC module/accel/dsa/accel_dsa_rpc.o 00:05:53.620 CC module/blob/bdev/blob_bdev.o 00:05:53.620 CC module/accel/ioat/accel_ioat_rpc.o 00:05:53.620 CC module/accel/dsa/accel_dsa.o 00:05:53.620 CC module/accel/iaa/accel_iaa.o 00:05:53.620 CC module/accel/iaa/accel_iaa_rpc.o 00:05:53.620 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:53.620 CC module/fsdev/aio/fsdev_aio.o 00:05:53.620 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:53.620 CC module/fsdev/aio/linux_aio_mgr.o 00:05:53.620 CC module/scheduler/gscheduler/gscheduler.o 00:05:53.620 SO libspdk_env_dpdk_rpc.so.6.0 00:05:53.620 SYMLINK libspdk_env_dpdk_rpc.so 00:05:53.620 LIB libspdk_keyring_file.a 00:05:53.620 LIB libspdk_keyring_linux.a 00:05:53.620 LIB libspdk_scheduler_gscheduler.a 00:05:53.620 LIB libspdk_accel_ioat.a 00:05:53.620 SO libspdk_keyring_file.so.2.0 00:05:53.620 SO libspdk_keyring_linux.so.1.0 00:05:53.620 SO libspdk_scheduler_gscheduler.so.4.0 00:05:53.620 LIB libspdk_scheduler_dpdk_governor.a 00:05:53.620 LIB libspdk_accel_error.a 00:05:53.620 LIB libspdk_scheduler_dynamic.a 00:05:53.620 SO libspdk_accel_ioat.so.6.0 00:05:53.620 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:53.620 LIB libspdk_accel_iaa.a 00:05:53.620 SO libspdk_accel_error.so.2.0 00:05:53.620 SYMLINK libspdk_keyring_linux.so 00:05:53.879 SYMLINK libspdk_scheduler_gscheduler.so 00:05:53.879 SO 
libspdk_scheduler_dynamic.so.4.0 00:05:53.879 SYMLINK libspdk_keyring_file.so 00:05:53.879 LIB libspdk_blob_bdev.a 00:05:53.879 SYMLINK libspdk_accel_ioat.so 00:05:53.879 SO libspdk_accel_iaa.so.3.0 00:05:53.879 LIB libspdk_accel_dsa.a 00:05:53.879 SO libspdk_blob_bdev.so.11.0 00:05:53.879 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:53.879 SYMLINK libspdk_accel_error.so 00:05:53.879 SYMLINK libspdk_scheduler_dynamic.so 00:05:53.879 SO libspdk_accel_dsa.so.5.0 00:05:53.879 SYMLINK libspdk_accel_iaa.so 00:05:53.879 SYMLINK libspdk_blob_bdev.so 00:05:53.879 SYMLINK libspdk_accel_dsa.so 00:05:54.139 LIB libspdk_fsdev_aio.a 00:05:54.139 SO libspdk_fsdev_aio.so.1.0 00:05:54.139 LIB libspdk_sock_posix.a 00:05:54.139 SO libspdk_sock_posix.so.6.0 00:05:54.139 SYMLINK libspdk_fsdev_aio.so 00:05:54.139 SYMLINK libspdk_sock_posix.so 00:05:54.398 CC module/bdev/null/bdev_null_rpc.o 00:05:54.398 CC module/bdev/null/bdev_null.o 00:05:54.398 CC module/bdev/malloc/bdev_malloc.o 00:05:54.398 CC module/bdev/passthru/vbdev_passthru.o 00:05:54.398 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:54.398 CC module/bdev/raid/bdev_raid.o 00:05:54.398 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:54.398 CC module/bdev/raid/bdev_raid_rpc.o 00:05:54.398 CC module/bdev/delay/vbdev_delay.o 00:05:54.398 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:54.398 CC module/bdev/raid/bdev_raid_sb.o 00:05:54.398 CC module/bdev/raid/raid0.o 00:05:54.398 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:54.398 CC module/bdev/raid/raid1.o 00:05:54.398 CC module/bdev/raid/concat.o 00:05:54.398 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:54.398 CC module/bdev/iscsi/bdev_iscsi.o 00:05:54.398 CC module/bdev/gpt/gpt.o 00:05:54.398 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:54.398 CC module/bdev/gpt/vbdev_gpt.o 00:05:54.398 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:54.398 CC module/bdev/error/vbdev_error.o 00:05:54.398 CC module/bdev/lvol/vbdev_lvol.o 00:05:54.398 CC module/bdev/split/vbdev_split.o 00:05:54.398 CC module/bdev/error/vbdev_error_rpc.o 00:05:54.398 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:54.398 CC module/bdev/split/vbdev_split_rpc.o 00:05:54.398 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:54.398 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:54.398 CC module/bdev/aio/bdev_aio.o 00:05:54.398 CC module/bdev/nvme/bdev_nvme.o 00:05:54.398 CC module/bdev/aio/bdev_aio_rpc.o 00:05:54.398 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:54.398 CC module/bdev/nvme/bdev_mdns_client.o 00:05:54.398 CC module/bdev/nvme/nvme_rpc.o 00:05:54.398 CC module/bdev/nvme/vbdev_opal.o 00:05:54.398 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:54.398 CC module/bdev/ftl/bdev_ftl.o 00:05:54.398 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:54.398 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:54.398 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:54.398 CC module/blobfs/bdev/blobfs_bdev.o 00:05:54.657 LIB libspdk_blobfs_bdev.a 00:05:54.657 SO libspdk_blobfs_bdev.so.6.0 00:05:54.657 LIB libspdk_bdev_null.a 00:05:54.657 LIB libspdk_bdev_split.a 00:05:54.657 LIB libspdk_bdev_error.a 00:05:54.657 LIB libspdk_bdev_passthru.a 00:05:54.657 LIB libspdk_bdev_gpt.a 00:05:54.657 SO libspdk_bdev_null.so.6.0 00:05:54.657 SO libspdk_bdev_error.so.6.0 00:05:54.657 SO libspdk_bdev_split.so.6.0 00:05:54.657 SO libspdk_bdev_passthru.so.6.0 00:05:54.657 LIB libspdk_bdev_ftl.a 00:05:54.657 SO libspdk_bdev_gpt.so.6.0 00:05:54.657 SYMLINK libspdk_blobfs_bdev.so 00:05:54.657 LIB libspdk_bdev_malloc.a 00:05:54.657 LIB libspdk_bdev_zone_block.a 00:05:54.657 SO 
libspdk_bdev_ftl.so.6.0 00:05:54.657 SYMLINK libspdk_bdev_error.so 00:05:54.657 SYMLINK libspdk_bdev_null.so 00:05:54.657 SYMLINK libspdk_bdev_split.so 00:05:54.657 LIB libspdk_bdev_aio.a 00:05:54.657 SO libspdk_bdev_malloc.so.6.0 00:05:54.657 LIB libspdk_bdev_delay.a 00:05:54.657 LIB libspdk_bdev_iscsi.a 00:05:54.657 SYMLINK libspdk_bdev_passthru.so 00:05:54.657 SO libspdk_bdev_zone_block.so.6.0 00:05:54.657 SYMLINK libspdk_bdev_gpt.so 00:05:54.657 SO libspdk_bdev_aio.so.6.0 00:05:54.657 SO libspdk_bdev_delay.so.6.0 00:05:54.657 SO libspdk_bdev_iscsi.so.6.0 00:05:54.916 SYMLINK libspdk_bdev_ftl.so 00:05:54.916 SYMLINK libspdk_bdev_malloc.so 00:05:54.916 SYMLINK libspdk_bdev_zone_block.so 00:05:54.916 SYMLINK libspdk_bdev_iscsi.so 00:05:54.916 SYMLINK libspdk_bdev_aio.so 00:05:54.916 LIB libspdk_bdev_lvol.a 00:05:54.916 SYMLINK libspdk_bdev_delay.so 00:05:54.916 SO libspdk_bdev_lvol.so.6.0 00:05:54.916 LIB libspdk_bdev_virtio.a 00:05:54.916 SO libspdk_bdev_virtio.so.6.0 00:05:54.916 SYMLINK libspdk_bdev_lvol.so 00:05:54.916 SYMLINK libspdk_bdev_virtio.so 00:05:55.177 LIB libspdk_bdev_raid.a 00:05:55.177 SO libspdk_bdev_raid.so.6.0 00:05:55.177 SYMLINK libspdk_bdev_raid.so 00:05:56.115 LIB libspdk_bdev_nvme.a 00:05:56.115 SO libspdk_bdev_nvme.so.7.0 00:05:56.375 SYMLINK libspdk_bdev_nvme.so 00:05:56.944 CC module/event/subsystems/vmd/vmd.o 00:05:56.944 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:56.944 CC module/event/subsystems/fsdev/fsdev.o 00:05:56.944 CC module/event/subsystems/iobuf/iobuf.o 00:05:56.944 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:56.944 CC module/event/subsystems/sock/sock.o 00:05:56.945 CC module/event/subsystems/keyring/keyring.o 00:05:56.945 CC module/event/subsystems/scheduler/scheduler.o 00:05:56.945 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:57.204 LIB libspdk_event_sock.a 00:05:57.204 LIB libspdk_event_vmd.a 00:05:57.204 LIB libspdk_event_fsdev.a 00:05:57.204 LIB libspdk_event_keyring.a 00:05:57.204 LIB libspdk_event_vhost_blk.a 00:05:57.204 LIB libspdk_event_iobuf.a 00:05:57.204 LIB libspdk_event_scheduler.a 00:05:57.204 SO libspdk_event_sock.so.5.0 00:05:57.204 SO libspdk_event_fsdev.so.1.0 00:05:57.204 SO libspdk_event_vmd.so.6.0 00:05:57.204 SO libspdk_event_vhost_blk.so.3.0 00:05:57.204 SO libspdk_event_keyring.so.1.0 00:05:57.204 SO libspdk_event_scheduler.so.4.0 00:05:57.204 SO libspdk_event_iobuf.so.3.0 00:05:57.204 SYMLINK libspdk_event_sock.so 00:05:57.204 SYMLINK libspdk_event_vhost_blk.so 00:05:57.204 SYMLINK libspdk_event_vmd.so 00:05:57.204 SYMLINK libspdk_event_fsdev.so 00:05:57.204 SYMLINK libspdk_event_scheduler.so 00:05:57.204 SYMLINK libspdk_event_keyring.so 00:05:57.204 SYMLINK libspdk_event_iobuf.so 00:05:57.463 CC module/event/subsystems/accel/accel.o 00:05:57.723 LIB libspdk_event_accel.a 00:05:57.723 SO libspdk_event_accel.so.6.0 00:05:57.723 SYMLINK libspdk_event_accel.so 00:05:57.982 CC module/event/subsystems/bdev/bdev.o 00:05:58.241 LIB libspdk_event_bdev.a 00:05:58.241 SO libspdk_event_bdev.so.6.0 00:05:58.241 SYMLINK libspdk_event_bdev.so 00:05:58.499 CC module/event/subsystems/scsi/scsi.o 00:05:58.499 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:58.499 CC module/event/subsystems/ublk/ublk.o 00:05:58.499 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:58.499 CC module/event/subsystems/nbd/nbd.o 00:05:58.757 LIB libspdk_event_nbd.a 00:05:58.757 LIB libspdk_event_scsi.a 00:05:58.757 LIB libspdk_event_ublk.a 00:05:58.757 SO libspdk_event_nbd.so.6.0 00:05:58.757 SO libspdk_event_ublk.so.3.0 00:05:58.757 SO 
libspdk_event_scsi.so.6.0 00:05:58.757 LIB libspdk_event_nvmf.a 00:05:58.757 SYMLINK libspdk_event_ublk.so 00:05:58.757 SYMLINK libspdk_event_nbd.so 00:05:58.757 SO libspdk_event_nvmf.so.6.0 00:05:58.757 SYMLINK libspdk_event_scsi.so 00:05:58.757 SYMLINK libspdk_event_nvmf.so 00:05:59.017 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:59.017 CC module/event/subsystems/iscsi/iscsi.o 00:05:59.277 LIB libspdk_event_vhost_scsi.a 00:05:59.278 LIB libspdk_event_iscsi.a 00:05:59.278 SO libspdk_event_vhost_scsi.so.3.0 00:05:59.278 SO libspdk_event_iscsi.so.6.0 00:05:59.278 SYMLINK libspdk_event_vhost_scsi.so 00:05:59.278 SYMLINK libspdk_event_iscsi.so 00:05:59.537 SO libspdk.so.6.0 00:05:59.537 SYMLINK libspdk.so 00:05:59.797 CC app/spdk_nvme_discover/discovery_aer.o 00:05:59.797 CC app/trace_record/trace_record.o 00:05:59.797 CC app/spdk_top/spdk_top.o 00:05:59.797 CXX app/trace/trace.o 00:05:59.797 CC app/spdk_lspci/spdk_lspci.o 00:05:59.797 CC test/rpc_client/rpc_client_test.o 00:05:59.797 CC app/spdk_nvme_identify/identify.o 00:05:59.797 TEST_HEADER include/spdk/accel.h 00:05:59.797 TEST_HEADER include/spdk/accel_module.h 00:05:59.797 TEST_HEADER include/spdk/base64.h 00:05:59.797 TEST_HEADER include/spdk/assert.h 00:05:59.797 TEST_HEADER include/spdk/bdev.h 00:05:59.797 TEST_HEADER include/spdk/barrier.h 00:05:59.797 CC app/spdk_nvme_perf/perf.o 00:05:59.797 TEST_HEADER include/spdk/bdev_module.h 00:05:59.797 TEST_HEADER include/spdk/bdev_zone.h 00:05:59.797 TEST_HEADER include/spdk/bit_array.h 00:05:59.797 TEST_HEADER include/spdk/bit_pool.h 00:05:59.797 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:59.797 TEST_HEADER include/spdk/blobfs.h 00:05:59.797 TEST_HEADER include/spdk/blob.h 00:05:59.797 TEST_HEADER include/spdk/blob_bdev.h 00:05:59.797 TEST_HEADER include/spdk/cpuset.h 00:05:59.797 TEST_HEADER include/spdk/config.h 00:05:59.797 TEST_HEADER include/spdk/conf.h 00:05:59.797 TEST_HEADER include/spdk/crc16.h 00:05:59.797 TEST_HEADER include/spdk/crc32.h 00:05:59.797 TEST_HEADER include/spdk/crc64.h 00:05:59.797 TEST_HEADER include/spdk/dif.h 00:05:59.797 TEST_HEADER include/spdk/endian.h 00:05:59.797 TEST_HEADER include/spdk/env_dpdk.h 00:05:59.797 TEST_HEADER include/spdk/env.h 00:05:59.797 TEST_HEADER include/spdk/event.h 00:05:59.797 TEST_HEADER include/spdk/dma.h 00:05:59.797 TEST_HEADER include/spdk/fd.h 00:05:59.797 TEST_HEADER include/spdk/fd_group.h 00:05:59.797 TEST_HEADER include/spdk/file.h 00:05:59.797 TEST_HEADER include/spdk/fsdev.h 00:05:59.797 TEST_HEADER include/spdk/ftl.h 00:05:59.797 TEST_HEADER include/spdk/fsdev_module.h 00:05:59.797 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:59.797 TEST_HEADER include/spdk/gpt_spec.h 00:05:59.797 TEST_HEADER include/spdk/hexlify.h 00:05:59.797 TEST_HEADER include/spdk/idxd.h 00:05:59.797 TEST_HEADER include/spdk/histogram_data.h 00:05:59.797 TEST_HEADER include/spdk/idxd_spec.h 00:05:59.797 TEST_HEADER include/spdk/init.h 00:05:59.797 TEST_HEADER include/spdk/ioat.h 00:05:59.797 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:59.797 TEST_HEADER include/spdk/iscsi_spec.h 00:05:59.797 TEST_HEADER include/spdk/json.h 00:05:59.797 TEST_HEADER include/spdk/ioat_spec.h 00:05:59.797 TEST_HEADER include/spdk/jsonrpc.h 00:05:59.797 TEST_HEADER include/spdk/likely.h 00:05:59.797 TEST_HEADER include/spdk/keyring_module.h 00:05:59.797 TEST_HEADER include/spdk/keyring.h 00:05:59.798 TEST_HEADER include/spdk/log.h 00:05:59.798 TEST_HEADER include/spdk/lvol.h 00:05:59.798 TEST_HEADER include/spdk/md5.h 00:05:59.798 TEST_HEADER 
include/spdk/memory.h 00:05:59.798 CC app/spdk_dd/spdk_dd.o 00:05:59.798 TEST_HEADER include/spdk/net.h 00:05:59.798 TEST_HEADER include/spdk/mmio.h 00:05:59.798 TEST_HEADER include/spdk/nbd.h 00:05:59.798 TEST_HEADER include/spdk/notify.h 00:05:59.798 TEST_HEADER include/spdk/nvme.h 00:05:59.798 TEST_HEADER include/spdk/nvme_intel.h 00:05:59.798 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:59.798 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:59.798 TEST_HEADER include/spdk/nvme_zns.h 00:05:59.798 TEST_HEADER include/spdk/nvme_spec.h 00:06:00.064 TEST_HEADER include/spdk/nvmf.h 00:06:00.064 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:00.064 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:00.064 TEST_HEADER include/spdk/nvmf_transport.h 00:06:00.064 TEST_HEADER include/spdk/opal.h 00:06:00.064 TEST_HEADER include/spdk/opal_spec.h 00:06:00.064 TEST_HEADER include/spdk/nvmf_spec.h 00:06:00.064 CC app/nvmf_tgt/nvmf_main.o 00:06:00.064 TEST_HEADER include/spdk/pci_ids.h 00:06:00.064 TEST_HEADER include/spdk/pipe.h 00:06:00.064 CC app/iscsi_tgt/iscsi_tgt.o 00:06:00.064 TEST_HEADER include/spdk/queue.h 00:06:00.064 TEST_HEADER include/spdk/rpc.h 00:06:00.064 TEST_HEADER include/spdk/scheduler.h 00:06:00.064 TEST_HEADER include/spdk/reduce.h 00:06:00.064 TEST_HEADER include/spdk/scsi.h 00:06:00.064 TEST_HEADER include/spdk/sock.h 00:06:00.064 TEST_HEADER include/spdk/scsi_spec.h 00:06:00.064 TEST_HEADER include/spdk/stdinc.h 00:06:00.064 TEST_HEADER include/spdk/string.h 00:06:00.064 TEST_HEADER include/spdk/trace.h 00:06:00.064 TEST_HEADER include/spdk/trace_parser.h 00:06:00.064 TEST_HEADER include/spdk/thread.h 00:06:00.064 TEST_HEADER include/spdk/tree.h 00:06:00.064 TEST_HEADER include/spdk/ublk.h 00:06:00.064 TEST_HEADER include/spdk/util.h 00:06:00.064 TEST_HEADER include/spdk/version.h 00:06:00.064 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:00.064 TEST_HEADER include/spdk/uuid.h 00:06:00.064 TEST_HEADER include/spdk/vhost.h 00:06:00.064 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:00.064 TEST_HEADER include/spdk/vmd.h 00:06:00.064 TEST_HEADER include/spdk/xor.h 00:06:00.064 TEST_HEADER include/spdk/zipf.h 00:06:00.064 CXX test/cpp_headers/accel.o 00:06:00.064 CXX test/cpp_headers/assert.o 00:06:00.064 CXX test/cpp_headers/barrier.o 00:06:00.064 CXX test/cpp_headers/accel_module.o 00:06:00.064 CXX test/cpp_headers/base64.o 00:06:00.064 CXX test/cpp_headers/bdev.o 00:06:00.064 CXX test/cpp_headers/bdev_module.o 00:06:00.064 CXX test/cpp_headers/bit_array.o 00:06:00.064 CXX test/cpp_headers/bdev_zone.o 00:06:00.064 CXX test/cpp_headers/bit_pool.o 00:06:00.064 CXX test/cpp_headers/blobfs_bdev.o 00:06:00.064 CXX test/cpp_headers/blob_bdev.o 00:06:00.064 CXX test/cpp_headers/blob.o 00:06:00.064 CXX test/cpp_headers/blobfs.o 00:06:00.064 CXX test/cpp_headers/config.o 00:06:00.064 CXX test/cpp_headers/crc32.o 00:06:00.064 CXX test/cpp_headers/conf.o 00:06:00.064 CXX test/cpp_headers/crc16.o 00:06:00.064 CXX test/cpp_headers/crc64.o 00:06:00.064 CXX test/cpp_headers/cpuset.o 00:06:00.064 CXX test/cpp_headers/dif.o 00:06:00.064 CXX test/cpp_headers/dma.o 00:06:00.064 CXX test/cpp_headers/endian.o 00:06:00.064 CXX test/cpp_headers/env_dpdk.o 00:06:00.064 CXX test/cpp_headers/env.o 00:06:00.064 CXX test/cpp_headers/fd.o 00:06:00.064 CXX test/cpp_headers/file.o 00:06:00.064 CXX test/cpp_headers/event.o 00:06:00.064 CXX test/cpp_headers/ftl.o 00:06:00.064 CXX test/cpp_headers/fuse_dispatcher.o 00:06:00.064 CXX test/cpp_headers/fd_group.o 00:06:00.064 CXX test/cpp_headers/fsdev.o 00:06:00.064 CXX 
test/cpp_headers/fsdev_module.o 00:06:00.064 CXX test/cpp_headers/gpt_spec.o 00:06:00.064 CXX test/cpp_headers/histogram_data.o 00:06:00.064 CXX test/cpp_headers/hexlify.o 00:06:00.064 CXX test/cpp_headers/idxd_spec.o 00:06:00.064 CXX test/cpp_headers/init.o 00:06:00.064 CXX test/cpp_headers/idxd.o 00:06:00.064 CXX test/cpp_headers/ioat_spec.o 00:06:00.064 CXX test/cpp_headers/ioat.o 00:06:00.064 CXX test/cpp_headers/iscsi_spec.o 00:06:00.064 CXX test/cpp_headers/jsonrpc.o 00:06:00.064 CXX test/cpp_headers/json.o 00:06:00.064 CC app/spdk_tgt/spdk_tgt.o 00:06:00.064 CXX test/cpp_headers/keyring.o 00:06:00.064 CXX test/cpp_headers/likely.o 00:06:00.064 CXX test/cpp_headers/log.o 00:06:00.064 CXX test/cpp_headers/keyring_module.o 00:06:00.064 CXX test/cpp_headers/lvol.o 00:06:00.064 CXX test/cpp_headers/md5.o 00:06:00.064 CXX test/cpp_headers/mmio.o 00:06:00.064 CXX test/cpp_headers/nbd.o 00:06:00.064 CXX test/cpp_headers/notify.o 00:06:00.064 CXX test/cpp_headers/memory.o 00:06:00.064 CXX test/cpp_headers/nvme.o 00:06:00.064 CXX test/cpp_headers/net.o 00:06:00.064 CXX test/cpp_headers/nvme_intel.o 00:06:00.064 CXX test/cpp_headers/nvme_ocssd.o 00:06:00.064 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:00.064 CXX test/cpp_headers/nvme_spec.o 00:06:00.064 CXX test/cpp_headers/nvmf_cmd.o 00:06:00.064 CXX test/cpp_headers/nvme_zns.o 00:06:00.064 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:00.064 CXX test/cpp_headers/nvmf.o 00:06:00.064 CXX test/cpp_headers/nvmf_spec.o 00:06:00.064 CXX test/cpp_headers/nvmf_transport.o 00:06:00.064 CC examples/util/zipf/zipf.o 00:06:00.064 CXX test/cpp_headers/opal.o 00:06:00.064 CXX test/cpp_headers/opal_spec.o 00:06:00.064 CC test/env/memory/memory_ut.o 00:06:00.064 CC examples/ioat/verify/verify.o 00:06:00.064 CC test/app/histogram_perf/histogram_perf.o 00:06:00.064 CC test/app/jsoncat/jsoncat.o 00:06:00.064 CC app/fio/bdev/fio_plugin.o 00:06:00.064 CC test/app/stub/stub.o 00:06:00.064 CC test/env/vtophys/vtophys.o 00:06:00.064 CC examples/ioat/perf/perf.o 00:06:00.064 CC test/app/bdev_svc/bdev_svc.o 00:06:00.064 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:00.064 CC app/fio/nvme/fio_plugin.o 00:06:00.064 CC test/thread/poller_perf/poller_perf.o 00:06:00.064 CC test/dma/test_dma/test_dma.o 00:06:00.064 CC test/env/pci/pci_ut.o 00:06:00.334 LINK spdk_nvme_discover 00:06:00.334 LINK rpc_client_test 00:06:00.334 LINK spdk_lspci 00:06:00.604 LINK spdk_trace_record 00:06:00.604 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:00.604 LINK zipf 00:06:00.604 CC test/env/mem_callbacks/mem_callbacks.o 00:06:00.604 LINK histogram_perf 00:06:00.604 LINK iscsi_tgt 00:06:00.604 LINK jsoncat 00:06:00.604 LINK spdk_tgt 00:06:00.604 CXX test/cpp_headers/pci_ids.o 00:06:00.604 CXX test/cpp_headers/pipe.o 00:06:00.604 LINK vtophys 00:06:00.604 CXX test/cpp_headers/queue.o 00:06:00.604 CXX test/cpp_headers/reduce.o 00:06:00.604 CXX test/cpp_headers/rpc.o 00:06:00.604 CXX test/cpp_headers/scheduler.o 00:06:00.604 CXX test/cpp_headers/scsi.o 00:06:00.604 CXX test/cpp_headers/scsi_spec.o 00:06:00.604 CXX test/cpp_headers/sock.o 00:06:00.604 CXX test/cpp_headers/stdinc.o 00:06:00.604 LINK env_dpdk_post_init 00:06:00.604 LINK nvmf_tgt 00:06:00.604 LINK stub 00:06:00.604 CXX test/cpp_headers/string.o 00:06:00.604 CXX test/cpp_headers/thread.o 00:06:00.604 CXX test/cpp_headers/trace.o 00:06:00.604 CXX test/cpp_headers/tree.o 00:06:00.604 CXX test/cpp_headers/ublk.o 00:06:00.604 CXX test/cpp_headers/util.o 00:06:00.604 CXX test/cpp_headers/trace_parser.o 00:06:00.604 CXX 
test/cpp_headers/uuid.o 00:06:00.604 CXX test/cpp_headers/version.o 00:06:00.604 CXX test/cpp_headers/vfio_user_pci.o 00:06:00.604 LINK interrupt_tgt 00:06:00.604 CXX test/cpp_headers/vfio_user_spec.o 00:06:00.604 CXX test/cpp_headers/vhost.o 00:06:00.604 CXX test/cpp_headers/vmd.o 00:06:00.604 CXX test/cpp_headers/zipf.o 00:06:00.604 CXX test/cpp_headers/xor.o 00:06:00.604 LINK bdev_svc 00:06:00.604 LINK ioat_perf 00:06:00.604 LINK spdk_dd 00:06:00.866 LINK poller_perf 00:06:00.866 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:00.866 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:00.866 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:00.866 LINK verify 00:06:01.124 LINK test_dma 00:06:01.124 LINK spdk_trace 00:06:01.124 LINK spdk_nvme_perf 00:06:01.124 LINK pci_ut 00:06:01.124 LINK nvme_fuzz 00:06:01.124 CC examples/vmd/led/led.o 00:06:01.124 CC examples/vmd/lsvmd/lsvmd.o 00:06:01.124 CC examples/idxd/perf/perf.o 00:06:01.124 CC examples/sock/hello_world/hello_sock.o 00:06:01.124 CC examples/thread/thread/thread_ex.o 00:06:01.124 CC test/event/reactor/reactor.o 00:06:01.124 CC test/event/reactor_perf/reactor_perf.o 00:06:01.124 CC test/event/event_perf/event_perf.o 00:06:01.124 LINK vhost_fuzz 00:06:01.124 CC test/event/app_repeat/app_repeat.o 00:06:01.125 CC test/event/scheduler/scheduler.o 00:06:01.125 LINK spdk_top 00:06:01.383 LINK spdk_bdev 00:06:01.383 LINK spdk_nvme 00:06:01.383 LINK mem_callbacks 00:06:01.383 LINK led 00:06:01.383 LINK lsvmd 00:06:01.383 LINK reactor 00:06:01.383 LINK event_perf 00:06:01.383 LINK reactor_perf 00:06:01.383 LINK spdk_nvme_identify 00:06:01.383 LINK hello_sock 00:06:01.383 LINK app_repeat 00:06:01.383 LINK thread 00:06:01.383 LINK scheduler 00:06:01.383 CC app/vhost/vhost.o 00:06:01.383 LINK idxd_perf 00:06:01.642 CC test/nvme/sgl/sgl.o 00:06:01.642 CC test/nvme/reset/reset.o 00:06:01.642 CC test/nvme/fused_ordering/fused_ordering.o 00:06:01.642 CC test/nvme/compliance/nvme_compliance.o 00:06:01.642 CC test/nvme/overhead/overhead.o 00:06:01.642 CC test/nvme/e2edp/nvme_dp.o 00:06:01.642 CC test/nvme/boot_partition/boot_partition.o 00:06:01.642 CC test/nvme/fdp/fdp.o 00:06:01.642 CC test/nvme/aer/aer.o 00:06:01.642 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:01.642 CC test/nvme/cuse/cuse.o 00:06:01.642 CC test/nvme/simple_copy/simple_copy.o 00:06:01.642 CC test/nvme/err_injection/err_injection.o 00:06:01.642 CC test/nvme/connect_stress/connect_stress.o 00:06:01.642 CC test/nvme/reserve/reserve.o 00:06:01.642 CC test/nvme/startup/startup.o 00:06:01.642 CC test/accel/dif/dif.o 00:06:01.642 LINK memory_ut 00:06:01.642 CC test/blobfs/mkfs/mkfs.o 00:06:01.642 LINK vhost 00:06:01.642 CC test/lvol/esnap/esnap.o 00:06:01.901 LINK boot_partition 00:06:01.901 LINK doorbell_aers 00:06:01.901 LINK err_injection 00:06:01.901 LINK fused_ordering 00:06:01.901 LINK startup 00:06:01.901 LINK connect_stress 00:06:01.901 CC examples/nvme/hello_world/hello_world.o 00:06:01.901 CC examples/nvme/reconnect/reconnect.o 00:06:01.901 CC examples/nvme/arbitration/arbitration.o 00:06:01.901 LINK reserve 00:06:01.901 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:01.901 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:01.901 CC examples/nvme/hotplug/hotplug.o 00:06:01.901 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:01.901 CC examples/nvme/abort/abort.o 00:06:01.901 LINK simple_copy 00:06:01.901 LINK sgl 00:06:01.901 LINK nvme_dp 00:06:01.901 LINK reset 00:06:01.901 LINK overhead 00:06:01.901 LINK aer 00:06:01.901 LINK mkfs 00:06:01.901 LINK nvme_compliance 
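The long run of CXX test/cpp_headers/*.o objects above is a self-containedness check: every public header under include/spdk is compiled in isolation, so a header that forgets one of its own includes fails the build on its own rather than hiding behind another file's includes. The idea, as a hedged shell sketch (paths, flags, and the temporary file name are illustrative):

    # Compile each public header on its own; failures name headers that
    # are not self-contained.
    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "${h##*/}" > hdr_check.cpp
        g++ -I include -c hdr_check.cpp -o /dev/null || echo "not self-contained: $h"
    done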
00:06:01.901 LINK fdp 00:06:01.901 CC examples/accel/perf/accel_perf.o 00:06:01.901 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:01.901 CC examples/blob/hello_world/hello_blob.o 00:06:01.901 CC examples/blob/cli/blobcli.o 00:06:01.901 LINK cmb_copy 00:06:01.901 LINK pmr_persistence 00:06:02.161 LINK hotplug 00:06:02.161 LINK hello_world 00:06:02.161 LINK arbitration 00:06:02.161 LINK reconnect 00:06:02.161 LINK abort 00:06:02.161 LINK dif 00:06:02.161 LINK hello_blob 00:06:02.161 LINK iscsi_fuzz 00:06:02.161 LINK nvme_manage 00:06:02.161 LINK hello_fsdev 00:06:02.420 LINK accel_perf 00:06:02.420 LINK blobcli 00:06:02.679 LINK cuse 00:06:02.679 CC test/bdev/bdevio/bdevio.o 00:06:02.939 CC examples/bdev/hello_world/hello_bdev.o 00:06:02.939 CC examples/bdev/bdevperf/bdevperf.o 00:06:02.939 LINK bdevio 00:06:02.939 LINK hello_bdev 00:06:03.508 LINK bdevperf 00:06:04.077 CC examples/nvmf/nvmf/nvmf.o 00:06:04.077 LINK nvmf 00:06:05.458 LINK esnap 00:06:05.458 00:06:05.458 real 0m55.327s 00:06:05.458 user 8m5.322s 00:06:05.458 sys 3m34.999s 00:06:05.458 08:42:28 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:05.458 08:42:28 make -- common/autotest_common.sh@10 -- $ set +x 00:06:05.458 ************************************ 00:06:05.458 END TEST make 00:06:05.458 ************************************ 00:06:05.458 08:42:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:05.458 08:42:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:05.458 08:42:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:05.458 08:42:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.458 08:42:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:05.458 08:42:28 -- pm/common@44 -- $ pid=196223 00:06:05.458 08:42:28 -- pm/common@50 -- $ kill -TERM 196223 00:06:05.458 08:42:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.458 08:42:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:05.458 08:42:28 -- pm/common@44 -- $ pid=196224 00:06:05.458 08:42:28 -- pm/common@50 -- $ kill -TERM 196224 00:06:05.458 08:42:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.458 08:42:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:05.458 08:42:28 -- pm/common@44 -- $ pid=196226 00:06:05.458 08:42:28 -- pm/common@50 -- $ kill -TERM 196226 00:06:05.458 08:42:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.458 08:42:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:05.458 08:42:28 -- pm/common@44 -- $ pid=196250 00:06:05.458 08:42:28 -- pm/common@50 -- $ sudo -E kill -TERM 196250 00:06:05.718 08:42:28 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:05.718 08:42:28 -- common/autotest_common.sh@1689 -- # lcov --version 00:06:05.718 08:42:28 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:05.718 08:42:28 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:05.718 08:42:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.718 08:42:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.718 08:42:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.718 08:42:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.718 08:42:28 -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.718 
08:42:28 -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.718 08:42:28 -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.718 08:42:28 -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.718 08:42:28 -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.718 08:42:28 -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.718 08:42:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.718 08:42:28 -- scripts/common.sh@344 -- # case "$op" in 00:06:05.718 08:42:28 -- scripts/common.sh@345 -- # : 1 00:06:05.718 08:42:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.718 08:42:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.719 08:42:28 -- scripts/common.sh@365 -- # decimal 1 00:06:05.719 08:42:28 -- scripts/common.sh@353 -- # local d=1 00:06:05.719 08:42:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.719 08:42:28 -- scripts/common.sh@355 -- # echo 1 00:06:05.719 08:42:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.719 08:42:28 -- scripts/common.sh@366 -- # decimal 2 00:06:05.719 08:42:28 -- scripts/common.sh@353 -- # local d=2 00:06:05.719 08:42:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.719 08:42:28 -- scripts/common.sh@355 -- # echo 2 00:06:05.719 08:42:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.719 08:42:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.719 08:42:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.719 08:42:28 -- scripts/common.sh@368 -- # return 0 00:06:05.719 08:42:28 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.719 08:42:28 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:05.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.719 --rc genhtml_branch_coverage=1 00:06:05.719 --rc genhtml_function_coverage=1 00:06:05.719 --rc genhtml_legend=1 00:06:05.719 --rc geninfo_all_blocks=1 00:06:05.719 --rc geninfo_unexecuted_blocks=1 00:06:05.719 00:06:05.719 ' 00:06:05.719 08:42:28 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:05.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.719 --rc genhtml_branch_coverage=1 00:06:05.719 --rc genhtml_function_coverage=1 00:06:05.719 --rc genhtml_legend=1 00:06:05.719 --rc geninfo_all_blocks=1 00:06:05.719 --rc geninfo_unexecuted_blocks=1 00:06:05.719 00:06:05.719 ' 00:06:05.719 08:42:28 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:05.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.719 --rc genhtml_branch_coverage=1 00:06:05.719 --rc genhtml_function_coverage=1 00:06:05.719 --rc genhtml_legend=1 00:06:05.719 --rc geninfo_all_blocks=1 00:06:05.719 --rc geninfo_unexecuted_blocks=1 00:06:05.719 00:06:05.719 ' 00:06:05.719 08:42:28 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:05.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.719 --rc genhtml_branch_coverage=1 00:06:05.719 --rc genhtml_function_coverage=1 00:06:05.719 --rc genhtml_legend=1 00:06:05.719 --rc geninfo_all_blocks=1 00:06:05.719 --rc geninfo_unexecuted_blocks=1 00:06:05.719 00:06:05.719 ' 00:06:05.719 08:42:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.719 08:42:28 -- nvmf/common.sh@7 -- # uname -s 00:06:05.719 08:42:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.719 08:42:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.719 08:42:28 -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:05.719 08:42:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.719 08:42:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.719 08:42:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.719 08:42:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.719 08:42:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.719 08:42:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.719 08:42:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.719 08:42:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:05.719 08:42:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:05.719 08:42:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.719 08:42:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.719 08:42:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.719 08:42:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.719 08:42:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:05.719 08:42:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.719 08:42:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.719 08:42:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.719 08:42:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.719 08:42:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.719 08:42:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.719 08:42:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.719 08:42:28 -- paths/export.sh@5 -- # export PATH 00:06:05.719 08:42:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.719 08:42:28 -- nvmf/common.sh@51 -- # : 0 00:06:05.719 08:42:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.719 08:42:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.719 08:42:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.719 08:42:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.719 08:42:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.719 08:42:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.719 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.719 08:42:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 
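The entries of the form "HH:MM:SS -- file@line -- # command" running through this part of the log are bash xtrace output with source attribution, emitted while autotest.sh sources its helper scripts. A PS4 along these lines would produce similar prefixes (illustrative, not necessarily the exact definition the SPDK scripts use):

    # The leading space keeps bash's nesting-depth character repetition
    # unobtrusive; date, BASH_SOURCE, and LINENO are re-evaluated for every
    # traced command and supply the timestamp and file@line parts.
    export PS4=' $(date +%T) -- ${BASH_SOURCE##*/}@${LINENO} -- # '
    set -x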
00:06:05.719 08:42:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.719 08:42:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.719 08:42:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:05.719 08:42:28 -- spdk/autotest.sh@32 -- # uname -s 00:06:05.719 08:42:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:05.719 08:42:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:05.719 08:42:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:06:05.719 08:42:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:05.719 08:42:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:06:05.719 08:42:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:05.979 08:42:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:05.979 08:42:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:05.979 08:42:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:05.979 08:42:28 -- spdk/autotest.sh@48 -- # udevadm_pid=258977 00:06:05.979 08:42:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:05.979 08:42:28 -- pm/common@17 -- # local monitor 00:06:05.979 08:42:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.979 08:42:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.979 08:42:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.979 08:42:28 -- pm/common@21 -- # date +%s 00:06:05.979 08:42:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.979 08:42:28 -- pm/common@21 -- # date +%s 00:06:05.979 08:42:28 -- pm/common@25 -- # sleep 1 00:06:05.979 08:42:28 -- pm/common@21 -- # date +%s 00:06:05.979 08:42:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878948 00:06:05.979 08:42:28 -- pm/common@21 -- # date +%s 00:06:05.979 08:42:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878948 00:06:05.979 08:42:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878948 00:06:05.979 08:42:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878948 00:06:05.979 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878948_collect-cpu-load.pm.log 00:06:05.979 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878948_collect-vmstat.pm.log 00:06:05.979 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878948_collect-cpu-temp.pm.log 00:06:05.979 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878948_collect-bmc-pm.bmc.pm.log 00:06:06.917 08:42:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:06.917 08:42:29 -- spdk/autotest.sh@57 
00:06:06.917 08:42:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:06:06.917 08:42:29 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:06:06.917 08:42:29 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:06.917 08:42:29 -- common/autotest_common.sh@10 -- # set +x
00:06:06.917 08:42:29 -- spdk/autotest.sh@59 -- # create_test_list
00:06:06.917 08:42:29 -- common/autotest_common.sh@748 -- # xtrace_disable
00:06:06.917 08:42:29 -- common/autotest_common.sh@10 -- # set +x
00:06:06.917 08:42:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh
00:06:06.917 08:42:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:06:06.917 08:42:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:06:06.917 08:42:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:06:06.917 08:42:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:06:06.917 08:42:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:06:06.917 08:42:29 -- common/autotest_common.sh@1453 -- # uname
00:06:06.917 08:42:29 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']'
00:06:06.917 08:42:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:06:06.917 08:42:29 -- common/autotest_common.sh@1473 -- # uname
00:06:06.917 08:42:29 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]]
00:06:06.917 08:42:29 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:06:06.917 08:42:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:06:06.917 lcov: LCOV version 1.15
00:06:06.917 08:42:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info
00:06:19.130 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:19.130 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:06:31.346 08:42:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:31.346 08:42:54 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:31.346 08:42:54 -- common/autotest_common.sh@10 -- # set +x
00:06:31.346 08:42:54 -- spdk/autotest.sh@78 -- # rm -f
00:06:31.346 08:42:54 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:06:33.889 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:06:33.889 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:06:33.889 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:06:33.889 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:06:33.889 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:06:34.149 0000:80:04.0 (8086 2021): Already using the ioatdma driver
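get_zoned_devs, traced next, walks /sys/block and records any NVMe namespace whose queue/zoned attribute is not "none", since zoned devices must not be blindly wiped. A standalone sketch of the same sysfs probe:

    # collect zoned NVMe namespaces; queue/zoned reads "none" for ordinary block devices
    zoned_devs=()
    for dev in /sys/block/nvme*; do
        if [[ -e $dev/queue/zoned && $(< "$dev/queue/zoned") != none ]]; then
            zoned_devs+=("${dev##*/}")
        fi
    done
    echo "found ${#zoned_devs[@]} zoned device(s): ${zoned_devs[*]}"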
00:06:34.409 08:42:57 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:34.409 08:42:57 -- common/autotest_common.sh@1653 -- # zoned_devs=()
00:06:34.409 08:42:57 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs
00:06:34.409 08:42:57 -- common/autotest_common.sh@1654 -- # local nvme bdf
00:06:34.409 08:42:57 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme*
00:06:34.409 08:42:57 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1
00:06:34.409 08:42:57 -- common/autotest_common.sh@1646 -- # local device=nvme0n1
00:06:34.409 08:42:57 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:34.409 08:42:57 -- common/autotest_common.sh@1649 -- # [[ none != none ]]
00:06:34.409 08:42:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:34.409 08:42:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:34.409 08:42:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:34.409 08:42:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:34.409 08:42:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:34.409 08:42:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:34.409 No valid GPT data, bailing
00:06:34.409 08:42:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:34.409 08:42:57 -- scripts/common.sh@394 -- # pt=
00:06:34.409 08:42:57 -- scripts/common.sh@395 -- # return 1
00:06:34.409 08:42:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:34.409 1+0 records in
00:06:34.409 1+0 records out
00:06:34.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00157112 s, 667 MB/s
00:06:34.409 08:42:57 -- spdk/autotest.sh@105 -- # sync
00:06:34.409 08:42:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:34.409 08:42:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:34.409 08:42:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes
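The wipe above only runs after two independent checks agree the namespace is unclaimed: SPDK's spdk-gpt.py bails with no valid GPT data, and blkid reports no partition-table type. Roughly, under the same assumption that nvme0n1 holds nothing worth keeping:

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no table is detected
    if [[ -z $pt ]]; then
        # zero the first MiB, enough to clear GPT/MBR headers left by earlier runs
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    fi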
00:06:39.696 08:43:02 -- spdk/autotest.sh@111 -- # uname -s
00:06:39.696 08:43:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:39.696 08:43:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:39.696 08:43:02 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:06:42.991 Hugepages
00:06:42.991 node hugesize free / total
00:06:42.991 node0 1048576kB 0 / 0
00:06:42.991 node0 2048kB 0 / 0
00:06:42.991 node1 1048576kB 0 / 0
00:06:42.991 node1 2048kB 0 / 0
00:06:42.991
00:06:42.991 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:42.991 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:06:42.991 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:06:42.991 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:06:42.991 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:06:42.991 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:06:42.991 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:06:42.991 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:06:42.991 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:06:42.991 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:06:42.991 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:06:42.991 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:06:42.991 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:06:42.991 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:06:42.991 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:06:42.991 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:06:42.991 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:06:42.992 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:06:42.992 08:43:05 -- spdk/autotest.sh@117 -- # uname -s
00:06:42.992 08:43:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:42.992 08:43:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:42.992 08:43:05 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:06:45.531 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:45.531 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:45.791 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:45.791 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:45.791 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:45.791 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:45.791 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:45.791 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:45.791 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:47.172 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:06:47.172 08:43:10 -- common/autotest_common.sh@1513 -- # sleep 1
00:06:48.111 08:43:11 -- common/autotest_common.sh@1514 -- # bdfs=()
00:06:48.111 08:43:11 -- common/autotest_common.sh@1514 -- # local bdfs
00:06:48.111 08:43:11 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs))
00:06:48.111 08:43:11 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs
00:06:48.111 08:43:11 -- common/autotest_common.sh@1494 -- # bdfs=()
00:06:48.111 08:43:11 -- common/autotest_common.sh@1494 -- # local bdfs
00:06:48.111 08:43:11 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:48.111 08:43:11 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:48.111 08:43:11 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr'
00:06:48.371 08:43:11 -- common/autotest_common.sh@1496 -- # (( 1 == 0 ))
00:06:48.371 08:43:11 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:5e:00.0
00:06:48.371 08:43:11 -- common/autotest_common.sh@1518 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:06:50.910 Waiting for block devices as requested
00:06:51.170 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:06:51.170 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:51.170 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:51.430 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:51.430 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:51.430 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:51.690 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:51.690 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:51.690 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:06:51.690 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:51.951 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:51.951 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:51.951 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:52.211 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:52.211 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:52.211 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:52.471 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
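The ioatdma/vfio-pci flips above are setup.sh rewriting each function's driver binding through sysfs. One common manual equivalent for a single device (assuming the IOMMU is enabled and vfio-pci is loaded; a sketch, not the script's exact code):

    bdf=0000:00:04.0
    # detach from the current kernel driver, if any
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind" 2> /dev/null
    # aim the next probe at vfio-pci, then trigger it
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    # clear the override so a later pass can rebind ioatdma
    echo > "/sys/bus/pci/devices/$bdf/driver_override"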
00:06:52.471 08:43:15 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}"
00:06:52.471 08:43:15 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:06:52.471 08:43:15 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0
00:06:52.471 08:43:15 -- common/autotest_common.sh@1483 -- # grep 0000:5e:00.0/nvme/nvme
00:06:52.471 08:43:15 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:52.471 08:43:15 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:06:52.471 08:43:15 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:52.471 08:43:15 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0
00:06:52.471 08:43:15 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0
00:06:52.471 08:43:15 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]]
00:06:52.472 08:43:15 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0
00:06:52.472 08:43:15 -- common/autotest_common.sh@1527 -- # grep oacs
00:06:52.472 08:43:15 -- common/autotest_common.sh@1527 -- # cut -d: -f2
00:06:52.472 08:43:15 -- common/autotest_common.sh@1527 -- # oacs=' 0xe'
00:06:52.472 08:43:15 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8
00:06:52.472 08:43:15 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]]
00:06:52.472 08:43:15 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0
00:06:52.472 08:43:15 -- common/autotest_common.sh@1536 -- # grep unvmcap
00:06:52.472 08:43:15 -- common/autotest_common.sh@1536 -- # cut -d: -f2
00:06:52.472 08:43:15 -- common/autotest_common.sh@1536 -- # unvmcap=' 0'
00:06:52.472 08:43:15 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]]
00:06:52.472 08:43:15 -- common/autotest_common.sh@1539 -- # continue
00:06:52.472 08:43:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:52.472 08:43:15 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:52.472 08:43:15 -- common/autotest_common.sh@10 -- # set +x
00:06:52.472 08:43:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:52.472 08:43:15 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:52.472 08:43:15 -- common/autotest_common.sh@10 -- # set +x
00:06:52.472 08:43:15 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:06:55.767 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:55.767 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:57.149 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:06:57.149 08:43:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:57.149 08:43:19 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:57.149 08:43:19 -- common/autotest_common.sh@10 -- # set +x
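The pre-cleanup loop above decides whether a controller needs a namespace revert by masking bit 3 (namespace management) out of the OACS word reported by nvme id-ctrl, then checking that unvmcap reads back as 0. A compact version of that probe, assuming nvme-cli is installed:

    ctrl=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oacs/ {print $2}')
    if (( oacs & 0x8 )); then   # bit 3 set: namespace management supported
        unvmcap=$(nvme id-ctrl "$ctrl" | awk -F: '/^unvmcap/ {print $2}')
        (( unvmcap == 0 )) && echo "$ctrl: no unallocated capacity, nothing to revert"
    fi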
00:06:57.149 08:43:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:57.149 08:43:19 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs
00:06:57.149 08:43:19 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54
00:06:57.149 08:43:19 -- common/autotest_common.sh@1559 -- # bdfs=()
00:06:57.149 08:43:19 -- common/autotest_common.sh@1559 -- # _bdfs=()
00:06:57.149 08:43:19 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs
00:06:57.149 08:43:19 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs))
00:06:57.149 08:43:19 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs
00:06:57.149 08:43:19 -- common/autotest_common.sh@1494 -- # bdfs=()
00:06:57.149 08:43:19 -- common/autotest_common.sh@1494 -- # local bdfs
00:06:57.149 08:43:19 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:57.149 08:43:19 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:57.149 08:43:19 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr'
00:06:57.150 08:43:20 -- common/autotest_common.sh@1496 -- # (( 1 == 0 ))
00:06:57.150 08:43:20 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:5e:00.0
00:06:57.150 08:43:20 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}"
00:06:57.150 08:43:20 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:06:57.150 08:43:20 -- common/autotest_common.sh@1562 -- # device=0x0a54
00:06:57.150 08:43:20 -- common/autotest_common.sh@1563 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:06:57.150 08:43:20 -- common/autotest_common.sh@1564 -- # bdfs+=($bdf)
00:06:57.150 08:43:20 -- common/autotest_common.sh@1568 -- # (( 1 > 0 ))
00:06:57.150 08:43:20 -- common/autotest_common.sh@1569 -- # printf '%s\n' 0000:5e:00.0
00:06:57.150 08:43:20 -- common/autotest_common.sh@1575 -- # [[ -z 0000:5e:00.0 ]]
00:06:57.150 08:43:20 -- common/autotest_common.sh@1580 -- # spdk_tgt_pid=273166
00:06:57.150 08:43:20 -- common/autotest_common.sh@1581 -- # waitforlisten 273166
00:06:57.150 08:43:20 -- common/autotest_common.sh@1579 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:06:57.150 08:43:20 -- common/autotest_common.sh@831 -- # '[' -z 273166 ']'
00:06:57.150 08:43:20 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:57.150 08:43:20 -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:57.150 08:43:20 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:57.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:57.150 08:43:20 -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:57.150 08:43:20 -- common/autotest_common.sh@10 -- # set +x
00:06:57.150 [2024-11-06 08:43:20.081162] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:06:57.150 [2024-11-06 08:43:20.081235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273166 ]
00:06:57.150 [2024-11-06 08:43:20.157617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.150 [2024-11-06 08:43:20.201376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.977 08:43:20 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:57.977 08:43:20 -- common/autotest_common.sh@864 -- # return 0
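waitforlisten, which printed the line above, simply polls the target's UNIX-domain RPC socket until it answers or the process dies. A minimal standalone version of the same loop (assuming SPDK's rpc.py client and the default /var/tmp/spdk.sock path):

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods only succeeds once the app is up and serving RPCs
            if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
            sleep 0.5
        done
        return 1
    }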
00:06:57.978 08:43:20 -- common/autotest_common.sh@1583 -- # bdf_id=0
00:06:57.978 08:43:20 -- common/autotest_common.sh@1584 -- # for bdf in "${bdfs[@]}"
00:06:57.978 08:43:20 -- common/autotest_common.sh@1585 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:07:01.267 nvme0n1
00:07:01.267 08:43:23 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:07:01.267 [2024-11-06 08:43:24.088817] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:07:01.267 request:
00:07:01.267 {
00:07:01.267 "nvme_ctrlr_name": "nvme0",
00:07:01.267 "password": "test",
00:07:01.267 "method": "bdev_nvme_opal_revert",
00:07:01.267 "req_id": 1
00:07:01.267 }
00:07:01.267 Got JSON-RPC error response
00:07:01.267 response:
00:07:01.267 {
00:07:01.267 "code": -32602,
00:07:01.267 "message": "Invalid parameters"
00:07:01.267 }
00:07:01.267 08:43:24 -- common/autotest_common.sh@1587 -- # true
00:07:01.267 08:43:24 -- common/autotest_common.sh@1588 -- # (( ++bdf_id ))
00:07:01.267 08:43:24 -- common/autotest_common.sh@1591 -- # killprocess 273166
00:07:01.267 08:43:24 -- common/autotest_common.sh@950 -- # '[' -z 273166 ']'
00:07:01.267 08:43:24 -- common/autotest_common.sh@954 -- # kill -0 273166
00:07:01.267 08:43:24 -- common/autotest_common.sh@955 -- # uname
00:07:01.267 08:43:24 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:01.267 08:43:24 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 273166
00:07:01.267 08:43:24 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:01.267 08:43:24 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:01.267 08:43:24 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 273166'
00:07:01.267 killing process with pid 273166
00:07:01.268 08:43:24 -- common/autotest_common.sh@969 -- # kill 273166
00:07:03.806 08:43:26 -- common/autotest_common.sh@974 -- # wait 273166
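Both RPCs in that exchange can be replayed by hand against a running spdk_tgt; on a controller without OPAL support the revert fails exactly as logged, with code -32602 (Invalid parameters):

    # attach the PCIe controller as 'nvme0', then request an OPAL revert with password 'test'
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test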
00:07:03.806 08:43:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:07:03.806 08:43:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:07:03.806 08:43:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:07:03.806 08:43:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:07:03.806 08:43:26 -- spdk/autotest.sh@149 -- # timing_enter lib
00:07:03.806 08:43:26 -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:03.806 08:43:26 -- common/autotest_common.sh@10 -- # set +x
00:07:03.806 08:43:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:07:03.806 08:43:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:07:03.806 08:43:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:03.806 08:43:26 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:03.806 08:43:26 -- common/autotest_common.sh@10 -- # set +x
00:07:03.806 ************************************
00:07:03.806 START TEST env
00:07:03.806 ************************************
00:07:03.806 08:43:26 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:07:03.806 * Looking for test storage...
00:07:03.806 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env
00:07:03.806 08:43:26 env -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:07:03.806 08:43:26 env -- common/autotest_common.sh@1689 -- # lcov --version
00:07:03.806 08:43:26 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:07:03.806 08:43:26 env -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:07:03.806 08:43:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:03.806 08:43:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:03.806 08:43:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:03.806 08:43:26 env -- scripts/common.sh@336 -- # IFS=.-:
00:07:03.806 08:43:26 env -- scripts/common.sh@336 -- # read -ra ver1
00:07:03.806 08:43:26 env -- scripts/common.sh@337 -- # IFS=.-:
00:07:03.806 08:43:26 env -- scripts/common.sh@337 -- # read -ra ver2
00:07:03.806 08:43:26 env -- scripts/common.sh@338 -- # local 'op=<'
00:07:03.806 08:43:26 env -- scripts/common.sh@340 -- # ver1_l=2
00:07:03.806 08:43:26 env -- scripts/common.sh@341 -- # ver2_l=1
00:07:03.806 08:43:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:03.806 08:43:26 env -- scripts/common.sh@344 -- # case "$op" in
00:07:03.806 08:43:26 env -- scripts/common.sh@345 -- # : 1
00:07:03.806 08:43:26 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:03.806 08:43:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:03.807 08:43:26 env -- scripts/common.sh@365 -- # decimal 1
00:07:03.807 08:43:26 env -- scripts/common.sh@353 -- # local d=1
00:07:03.807 08:43:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:03.807 08:43:26 env -- scripts/common.sh@355 -- # echo 1
00:07:03.807 08:43:26 env -- scripts/common.sh@365 -- # ver1[v]=1
00:07:03.807 08:43:26 env -- scripts/common.sh@366 -- # decimal 2
00:07:03.807 08:43:26 env -- scripts/common.sh@353 -- # local d=2
00:07:03.807 08:43:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:03.807 08:43:26 env -- scripts/common.sh@355 -- # echo 2
00:07:03.807 08:43:26 env -- scripts/common.sh@366 -- # ver2[v]=2
00:07:03.807 08:43:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:03.807 08:43:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:03.807 08:43:26 env -- scripts/common.sh@368 -- # return 0
00:07:03.807 08:43:26 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:03.807 08:43:26 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:07:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.807 --rc genhtml_branch_coverage=1
00:07:03.807 --rc genhtml_function_coverage=1
00:07:03.807 --rc genhtml_legend=1
00:07:03.807 --rc geninfo_all_blocks=1
00:07:03.807 --rc geninfo_unexecuted_blocks=1
00:07:03.807
00:07:03.807 '
00:07:03.807 08:43:26 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:07:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.807 --rc genhtml_branch_coverage=1
00:07:03.807 --rc genhtml_function_coverage=1
00:07:03.807 --rc genhtml_legend=1
00:07:03.807 --rc geninfo_all_blocks=1
00:07:03.807 --rc geninfo_unexecuted_blocks=1
00:07:03.807
00:07:03.807 '
00:07:03.807 08:43:26 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:07:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.807 --rc genhtml_branch_coverage=1
00:07:03.807 --rc genhtml_function_coverage=1
00:07:03.807 --rc genhtml_legend=1
00:07:03.807 --rc geninfo_all_blocks=1
00:07:03.807 --rc geninfo_unexecuted_blocks=1
00:07:03.807
00:07:03.807 '
00:07:03.807 08:43:26 env -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:07:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.807 --rc genhtml_branch_coverage=1
00:07:03.807 --rc genhtml_function_coverage=1
00:07:03.807 --rc genhtml_legend=1
00:07:03.807 --rc geninfo_all_blocks=1
00:07:03.807 --rc geninfo_unexecuted_blocks=1
00:07:03.807
00:07:03.807 '
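The trace above is scripts/common.sh concluding that lcov 1.15 sorts before 2 by splitting both strings on dots and comparing field by field, which selects the old-style --rc coverage options. Where GNU coreutils is available, sort -V gives an equivalent check (a simpler stand-in, not the script's own method):

    version_lt() {
        # true when $1 sorts strictly before $2 in version order
        [[ $1 != "$2" && $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: keep branch/function rc flags"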
00:07:03.807 08:43:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:07:03.807 08:43:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:03.807 08:43:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:03.807 08:43:26 env -- common/autotest_common.sh@10 -- # set +x
00:07:03.807 ************************************
00:07:03.807 START TEST env_memory
00:07:03.807 ************************************
00:07:03.807 08:43:26 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:07:03.807
00:07:03.807
00:07:03.807 CUnit - A unit testing framework for C - Version 2.1-3
00:07:03.807 http://cunit.sourceforge.net/
00:07:03.807
00:07:03.807
00:07:03.807 Suite: memory
00:07:03.807 Test: alloc and free memory map ...[2024-11-06 08:43:26.595960] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:07:03.807 passed
00:07:03.807 Test: mem map translation ...[2024-11-06 08:43:26.614941] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:07:03.807 [2024-11-06 08:43:26.614958] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:07:03.807 [2024-11-06 08:43:26.614994] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:07:03.807 [2024-11-06 08:43:26.615000] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:07:03.807 passed
00:07:03.807 Test: mem map registration ...[2024-11-06 08:43:26.653674] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:07:03.807 [2024-11-06 08:43:26.653692] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:07:03.807 passed
00:07:03.807 Test: mem map adjacent registrations ...passed
00:07:03.807
00:07:03.807 Run Summary: Type Total Ran Passed Failed Inactive
00:07:03.807 suites 1 1 n/a 0 0
00:07:03.807 tests 4 4 4 0 0
00:07:03.807 asserts 152 152 152 0 n/a
00:07:03.807
00:07:03.807 Elapsed time = 0.143 seconds
00:07:03.807
00:07:03.807 real 0m0.156s
00:07:03.807 user 0m0.149s
00:07:03.807 sys 0m0.006s
00:07:03.807 08:43:26 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:03.807 08:43:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:07:03.807 ************************************
00:07:03.807 END TEST env_memory
00:07:03.807 ************************************
00:07:03.807 08:43:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:07:03.807 08:43:26 env.env_vtophys -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:03.807 08:43:26 env.env_vtophys -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:03.807 08:43:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:07:03.807 ************************************
00:07:03.807 START TEST env_vtophys
00:07:03.807 ************************************
00:07:03.807 08:43:26 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:07:03.807 EAL: lib.eal log level changed from notice to debug
00:07:03.807 EAL: Detected lcore 0 as core 0 on socket 0
00:07:03.807 EAL: Detected lcore 1 as core 1 on socket 0
00:07:03.807 EAL: Detected lcore 2 as core 2 on socket 0
00:07:03.807 EAL: Detected lcore 3 as core 3 on socket 0
00:07:03.807 EAL: Detected lcore 4 as core 4 on socket 0
00:07:03.807 EAL: Detected lcore 5 as core 5 on socket 0
00:07:03.807 EAL: Detected lcore 6 as core 6 on socket 0
00:07:03.807 EAL: Detected lcore 7 as core 8 on socket 0
00:07:03.807 EAL: Detected lcore 8 as core 9 on socket 0
00:07:03.807 EAL: Detected lcore 9 as core 10 on socket 0
00:07:03.807 EAL: Detected lcore 10 as core 11 on socket 0
00:07:03.807 EAL: Detected lcore 11 as core 12 on socket 0
00:07:03.807 EAL: Detected lcore 12 as core 13 on socket 0
00:07:03.807 EAL: Detected lcore 13 as core 16 on socket 0
00:07:03.807 EAL: Detected lcore 14 as core 17 on socket 0
00:07:03.807 EAL: Detected lcore 15 as core 18 on socket 0
00:07:03.807 EAL: Detected lcore 16 as core 19 on socket 0
00:07:03.807 EAL: Detected lcore 17 as core 20 on socket 0
00:07:03.807 EAL: Detected lcore 18 as core 21 on socket 0
00:07:03.807 EAL: Detected lcore 19 as core 25 on socket 0
00:07:03.807 EAL: Detected lcore 20 as core 26 on socket 0
00:07:03.807 EAL: Detected lcore 21 as core 27 on socket 0
00:07:03.807 EAL: Detected lcore 22 as core 28 on socket 0
00:07:03.807 EAL: Detected lcore 23 as core 29 on socket 0
00:07:03.807 EAL: Detected lcore 24 as core 0 on socket 1
00:07:03.807 EAL: Detected lcore 25 as core 1 on socket 1
00:07:03.807 EAL: Detected lcore 26 as core 2 on socket 1
00:07:03.807 EAL: Detected lcore 27 as core 3 on socket 1
00:07:03.807 EAL: Detected lcore 28 as core 4 on socket 1
00:07:03.807 EAL: Detected lcore 29 as core 5 on socket 1
00:07:03.807 EAL: Detected lcore 30 as core 6 on socket 1
00:07:03.807 EAL: Detected lcore 31 as core 8 on socket 1
00:07:03.807 EAL: Detected lcore 32 as core 10 on socket 1
00:07:03.807 EAL: Detected lcore 33 as core 11 on socket 1
00:07:03.807 EAL: Detected lcore 34 as core 12 on socket 1
00:07:03.807 EAL: Detected lcore 35 as core 13 on socket 1
00:07:03.807 EAL: Detected lcore 36 as core 16 on socket 1
00:07:03.807 EAL: Detected lcore 37 as core 17 on socket 1
00:07:03.808 EAL: Detected lcore 38 as core 18 on socket 1
00:07:03.808 EAL: Detected lcore 39 as core 19 on socket 1
00:07:03.808 EAL: Detected lcore 40 as core 20 on socket 1
00:07:03.808 EAL: Detected lcore 41 as core 21 on socket 1
00:07:03.808 EAL: Detected lcore 42 as core 24 on socket 1
00:07:03.808 EAL: Detected lcore 43 as core 25 on socket 1
00:07:03.808 EAL: Detected lcore 44 as core 26 on socket 1
00:07:03.808 EAL: Detected lcore 45 as core 27 on socket 1
00:07:03.808 EAL: Detected lcore 46 as core 28 on socket 1
00:07:03.808 EAL: Detected lcore 47 as core 29 on socket 1
00:07:03.808 EAL: Detected lcore 48 as core 0 on socket 0
00:07:03.808 EAL: Detected lcore 49 as core 1 on socket 0
00:07:03.808 EAL: Detected lcore 50 as core 2 on socket 0
00:07:03.808 EAL: Detected lcore 51 as core 3 on socket 0
00:07:03.808 EAL: Detected lcore 52 as core 4 on socket 0
00:07:03.808 EAL: Detected lcore 53 as core 5 on socket 0
00:07:03.808 EAL: Detected lcore 54 as core 6 on socket 0
00:07:03.808 EAL: Detected lcore 55 as core 8 on socket 0
00:07:03.808 EAL: Detected lcore 56 as core 9 on socket 0
00:07:03.808 EAL: Detected lcore 57 as core 10 on socket 0
00:07:03.808 EAL: Detected lcore 58 as core 11 on socket 0
00:07:03.808 EAL: Detected lcore 59 as core 12 on socket 0
00:07:03.808 EAL: Detected lcore 60 as core 13 on socket 0
00:07:03.808 EAL: Detected lcore 61 as core 16 on socket 0
00:07:03.808 EAL: Detected lcore 62 as core 17 on socket 0
00:07:03.808 EAL: Detected lcore 63 as core 18 on socket 0
00:07:03.808 EAL: Detected lcore 64 as core 19 on socket 0
00:07:03.808 EAL: Detected lcore 65 as core 20 on socket 0
00:07:03.808 EAL: Detected lcore 66 as core 21 on socket 0
00:07:03.808 EAL: Detected lcore 67 as core 25 on socket 0
00:07:03.808 EAL: Detected lcore 68 as core 26 on socket 0
00:07:03.808 EAL: Detected lcore 69 as core 27 on socket 0
00:07:03.808 EAL: Detected lcore 70 as core 28 on socket 0
00:07:03.808 EAL: Detected lcore 71 as core 29 on socket 0
00:07:03.808 EAL: Detected lcore 72 as core 0 on socket 1
00:07:03.808 EAL: Detected lcore 73 as core 1 on socket 1
00:07:03.808 EAL: Detected lcore 74 as core 2 on socket 1
00:07:03.808 EAL: Detected lcore 75 as core 3 on socket 1
00:07:03.808 EAL: Detected lcore 76 as core 4 on socket 1
00:07:03.808 EAL: Detected lcore 77 as core 5 on socket 1
00:07:03.808 EAL: Detected lcore 78 as core 6 on socket 1
00:07:03.808 EAL: Detected lcore 79 as core 8 on socket 1
00:07:03.808 EAL: Detected lcore 80 as core 10 on socket 1
00:07:03.808 EAL: Detected lcore 81 as core 11 on socket 1
00:07:03.808 EAL: Detected lcore 82 as core 12 on socket 1
00:07:03.808 EAL: Detected lcore 83 as core 13 on socket 1
00:07:03.808 EAL: Detected lcore 84 as core 16 on socket 1
00:07:03.808 EAL: Detected lcore 85 as core 17 on socket 1
00:07:03.808 EAL: Detected lcore 86 as core 18 on socket 1
00:07:03.808 EAL: Detected lcore 87 as core 19 on socket 1
00:07:03.808 EAL: Detected lcore 88 as core 20 on socket 1
00:07:03.808 EAL: Detected lcore 89 as core 21 on socket 1
00:07:03.808 EAL: Detected lcore 90 as core 24 on socket 1
00:07:03.808 EAL: Detected lcore 91 as core 25 on socket 1
00:07:03.808 EAL: Detected lcore 92 as core 26 on socket 1
00:07:03.808 EAL: Detected lcore 93 as core 27 on socket 1
00:07:03.808 EAL: Detected lcore 94 as core 28 on socket 1
00:07:03.808 EAL: Detected lcore 95 as core 29 on socket 1
00:07:03.808 EAL: Maximum logical cores by configuration: 128
00:07:03.808 EAL: Detected CPU lcores: 96
00:07:03.808 EAL: Detected NUMA nodes: 2
00:07:03.808 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:07:03.808 EAL: Detected shared linkage of DPDK
00:07:03.808 EAL: No shared files mode enabled, IPC will be disabled
00:07:04.068 EAL: Bus pci wants IOVA as 'DC'
00:07:04.068 EAL: Buses did not request a specific IOVA mode.
00:07:04.069 EAL: IOMMU is available, selecting IOVA as VA mode.
00:07:04.069 EAL: Selected IOVA mode 'VA'
00:07:04.069 EAL: Probing VFIO support...
00:07:04.069 EAL: IOMMU type 1 (Type 1) is supported
00:07:04.069 EAL: IOMMU type 7 (sPAPR) is not supported
00:07:04.069 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:07:04.069 EAL: VFIO support initialized
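The virtual areas EAL reserves next are only backed once hugepages are mapped into them; the setup.sh status table earlier in this log read the same per-node counters. A quick way to inspect them by hand (standard kernel sysfs paths):

    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}
            echo "${node##*/} $size: $(< "$hp/free_hugepages") free / $(< "$hp/nr_hugepages") total"
        done
    done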
00:07:04.069 EAL: Ask a virtual area of 0x2e000 bytes
00:07:04.069 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:07:04.069 EAL: Setting up physically contiguous memory...
00:07:04.069 EAL: Setting maximum number of open files to 524288
00:07:04.069 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:07:04.069 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:07:04.069 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:07:04.069 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:07:04.069 EAL: Ask a virtual area of 0x61000 bytes
00:07:04.069 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:07:04.069 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:07:04.069 EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.069 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:07:04.069 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:07:04.069 EAL: Hugepages will be freed exactly as allocated.
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: TSC frequency is ~2100000 KHz
00:07:04.069 EAL: Main lcore 0 is ready (tid=7fcb26f52a00;cpuset=[0])
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 0
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 2MB
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:07:04.069 EAL: Mem event callback 'spdk:(nil)' registered
00:07:04.069
00:07:04.069
00:07:04.069 CUnit - A unit testing framework for C - Version 2.1-3
00:07:04.069 http://cunit.sourceforge.net/
00:07:04.069
00:07:04.069
00:07:04.069 Suite: components_suite
00:07:04.069 Test: vtophys_malloc_test ...passed
00:07:04.069 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 4MB
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was shrunk by 4MB
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 6MB
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was shrunk by 6MB
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 10MB
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was shrunk by 10MB
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 18MB
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was shrunk by 18MB
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 34MB
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was shrunk by 34MB
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 66MB
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was shrunk by 66MB
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 130MB
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was shrunk by 130MB
00:07:04.069 EAL: Trying to obtain current memory policy.
00:07:04.069 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.069 EAL: Restoring previous memory policy: 4
00:07:04.069 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.069 EAL: request: mp_malloc_sync
00:07:04.069 EAL: No shared files mode enabled, IPC is disabled
00:07:04.069 EAL: Heap on socket 0 was expanded by 258MB
00:07:04.329 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.329 EAL: request: mp_malloc_sync
00:07:04.329 EAL: No shared files mode enabled, IPC is disabled
00:07:04.329 EAL: Heap on socket 0 was shrunk by 258MB
00:07:04.329 EAL: Trying to obtain current memory policy.
00:07:04.329 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.329 EAL: Restoring previous memory policy: 4
00:07:04.329 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.329 EAL: request: mp_malloc_sync
00:07:04.329 EAL: No shared files mode enabled, IPC is disabled
00:07:04.329 EAL: Heap on socket 0 was expanded by 514MB
00:07:04.589 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.589 EAL: request: mp_malloc_sync
00:07:04.589 EAL: No shared files mode enabled, IPC is disabled
00:07:04.589 EAL: Heap on socket 0 was shrunk by 514MB
00:07:04.589 EAL: Trying to obtain current memory policy.
00:07:04.589 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.589 EAL: Restoring previous memory policy: 4
00:07:04.589 EAL: Calling mem event callback 'spdk:(nil)'
00:07:04.589 EAL: request: mp_malloc_sync
00:07:04.589 EAL: No shared files mode enabled, IPC is disabled
00:07:04.589 EAL: Heap on socket 0 was expanded by 1026MB
00:07:04.848 EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.108 EAL: request: mp_malloc_sync
00:07:05.108 EAL: No shared files mode enabled, IPC is disabled
00:07:05.108 EAL: Heap on socket 0 was shrunk by 1026MB
00:07:05.108 passed
00:07:05.108
00:07:05.108 Run Summary: Type Total Ran Passed Failed Inactive
00:07:05.108 suites 1 1 n/a 0 0
00:07:05.108 tests 2 2 2 0 0
00:07:05.108 asserts 497 497 497 0 n/a
00:07:05.108
00:07:05.108 Elapsed time = 0.969 seconds
00:07:05.108 EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.108 EAL: request: mp_malloc_sync
00:07:05.108 EAL: No shared files mode enabled, IPC is disabled
00:07:05.108 EAL: Heap on socket 0 was shrunk by 2MB
00:07:05.108 EAL: No shared files mode enabled, IPC is disabled
00:07:05.108 EAL: No shared files mode enabled, IPC is disabled
00:07:05.108 EAL: No shared files mode enabled, IPC is disabled
00:07:05.108
00:07:05.108 real 0m1.099s
00:07:05.108 user 0m0.640s
00:07:05.108 sys 0m0.430s
00:07:05.108 08:43:27 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:05.108 08:43:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:07:05.108 ************************************
00:07:05.108 END TEST env_vtophys
00:07:05.108 ************************************
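Every suite in this log runs under the same run_test harness: a START banner, the timed command, and a matching END banner, with the real/user/sys figures between them coming from bash's time builtin. Its shape, reduced to essentials (the actual helper also records per-test timing data and xtrace state):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"; local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }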
00:07:05.108 08:43:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:07:05.108 08:43:27 env.env_pci -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:05.108 08:43:27 env.env_pci -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:05.108 08:43:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:07:05.108 ************************************
00:07:05.108 START TEST env_pci
00:07:05.108 ************************************
00:07:05.108 08:43:27 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:07:05.108
00:07:05.108
00:07:05.108 CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.108 http://cunit.sourceforge.net/
00:07:05.108
00:07:05.108
00:07:05.108 Suite: pci
00:07:05.108 Test: pci_hook ...[2024-11-06 08:43:27.962674] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 274496 has claimed it
00:07:05.108 EAL: Cannot find device (10000:00:01.0)
00:07:05.108 EAL: Failed to attach device on primary process
00:07:05.108 passed
00:07:05.108
00:07:05.108 Run Summary: Type Total Ran Passed Failed Inactive
00:07:05.108 suites 1 1 n/a 0 0
00:07:05.108 tests 1 1 1 0 0
00:07:05.108 asserts 25 25 25 0 n/a
00:07:05.108
00:07:05.108 Elapsed time = 0.029 seconds
00:07:05.108
00:07:05.108 real 0m0.050s
00:07:05.108 user 0m0.017s
00:07:05.108 sys 0m0.033s
00:07:05.108 08:43:27 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:05.108 08:43:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:07:05.108 ************************************
00:07:05.108 END TEST env_pci
00:07:05.108 ************************************
00:07:05.108 08:43:28 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:07:05.108 08:43:28 env -- env/env.sh@15 -- # uname
00:07:05.108 08:43:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:07:05.108 08:43:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:07:05.108 08:43:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:05.108 08:43:28 env.env_dpdk_post_init -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:05.108 08:43:28 env.env_dpdk_post_init -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:05.108 08:43:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:07:05.108 ************************************
00:07:05.108 START TEST env_dpdk_post_init
00:07:05.108 ************************************
00:07:05.108 08:43:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:05.108 EAL: Detected CPU lcores: 96
00:07:05.108 EAL: Detected NUMA nodes: 2
00:07:05.108 EAL: Detected shared linkage of DPDK
00:07:05.108 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:05.108 EAL: Selected IOVA mode 'VA'
00:07:05.108 EAL: VFIO support initialized
00:07:05.368 TELEMETRY: No legacy callbacks, legacy socket not created
00:07:05.368 EAL: Using IOMMU type 1 (Type 1)
00:07:05.368 EAL: Ignore mapping IO port bar(1)
00:07:05.368 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:07:05.368 EAL: Ignore mapping IO port bar(1)
00:07:05.368 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:07:05.368 EAL: Ignore mapping IO port bar(1)
00:07:05.368 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:07:05.368 EAL: Ignore mapping IO port bar(1)
00:07:05.368 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:07:05.369 EAL: Ignore mapping IO port bar(1)
00:07:05.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:07:05.369 EAL: Ignore mapping IO port bar(1)
00:07:05.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:07:05.369 EAL: Ignore mapping IO port bar(1)
00:07:05.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:07:05.369 EAL: Ignore mapping IO port bar(1)
00:07:05.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:07:06.309 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:07:06.309 EAL: Ignore mapping IO port bar(1)
00:07:06.309 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:07:09.598 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:07:09.598 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:07:10.166 Starting DPDK initialization...
00:07:10.166 Starting SPDK post initialization...
00:07:10.166 SPDK NVMe probe
00:07:10.166 Attaching to 0000:5e:00.0
00:07:10.166 Attached to 0000:5e:00.0
00:07:10.166 Cleaning up...
00:07:10.166
00:07:10.166 real 0m4.858s
00:07:10.166 user 0m3.444s
00:07:10.166 sys 0m0.487s
00:07:10.166 08:43:32 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:10.166 08:43:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:07:10.166 ************************************
00:07:10.166 END TEST env_dpdk_post_init
00:07:10.166 ************************************
00:07:10.166 08:43:32 env -- env/env.sh@26 -- # uname
00:07:10.166 08:43:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:07:10.166 08:43:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:07:10.166 08:43:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:10.166 08:43:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:10.166 08:43:32 env -- common/autotest_common.sh@10 -- # set +x
00:07:10.166 ************************************
00:07:10.166 START TEST env_mem_callbacks
00:07:10.166 ************************************
00:07:10.166 08:43:32 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:07:10.166 EAL: Detected CPU lcores: 96
00:07:10.166 EAL: Detected NUMA nodes: 2
00:07:10.166 EAL: Detected shared linkage of DPDK
00:07:10.166 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:10.166 EAL: Selected IOVA mode 'VA'
00:07:10.166 EAL: VFIO support initialized
00:07:10.166 TELEMETRY: No legacy callbacks, legacy socket not created
00:07:10.166
00:07:10.166
00:07:10.166 CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.166 http://cunit.sourceforge.net/
00:07:10.166
00:07:10.166
00:07:10.166 Suite: memory
00:07:10.166 Test: test ...
00:07:10.166 register 0x200000200000 2097152
00:07:10.166 malloc 3145728
00:07:10.166 register 0x200000400000 4194304
00:07:10.166 buf 0x200000500000 len 3145728 PASSED
00:07:10.166 malloc 64
00:07:10.166 buf 0x2000004fff40 len 64 PASSED
00:07:10.166 malloc 4194304
00:07:10.166 register 0x200000800000 6291456
00:07:10.166 buf 0x200000a00000 len 4194304 PASSED
00:07:10.166 free 0x200000500000 3145728
00:07:10.166 free 0x2000004fff40 64
00:07:10.166 unregister 0x200000400000 4194304 PASSED
00:07:10.166 free 0x200000a00000 4194304
00:07:10.166 unregister 0x200000800000 6291456 PASSED
00:07:10.166 malloc 8388608
00:07:10.166 register 0x200000400000 10485760
00:07:10.166 buf 0x200000600000 len 8388608 PASSED
00:07:10.166 free 0x200000600000 8388608
00:07:10.166 unregister 0x200000400000 10485760 PASSED
00:07:10.166 passed
00:07:10.166
00:07:10.166 Run Summary: Type Total Ran Passed Failed Inactive
00:07:10.166 suites 1 1 n/a 0 0
00:07:10.166 tests 1 1 1 0 0
00:07:10.166 asserts 15 15 15 0 n/a
00:07:10.166
00:07:10.166 Elapsed time = 0.008 seconds
00:07:10.166
00:07:10.166 real 0m0.050s
00:07:10.166 user 0m0.017s
00:07:10.166 sys 0m0.033s
00:07:10.166 08:43:33 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:10.166 08:43:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:07:10.166 ************************************
00:07:10.166 END TEST env_mem_callbacks
00:07:10.166 ************************************
00:07:10.167
00:07:10.167 real 0m6.737s
00:07:10.167 user 0m4.502s
00:07:10.167 sys 0m1.315s
00:07:10.167 08:43:33 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:10.167 08:43:33 env -- common/autotest_common.sh@10 -- # set +x
00:07:10.167 ************************************
00:07:10.167 END TEST env
00:07:10.167 ************************************
00:07:10.167 08:43:33 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.426 ************************************
00:07:10.426 START TEST rpc
00:07:10.426 ************************************
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:07:10.426 * Looking for test storage...
00:07:10.426 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@1689 -- # lcov --version
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:07:10.426 08:43:33 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:07:10.426 08:43:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:10.426 08:43:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:10.426 08:43:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:10.426 08:43:33 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:10.426 08:43:33 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:10.426 08:43:33 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:10.426 08:43:33 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:10.426 08:43:33 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:10.427 08:43:33 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:10.427 08:43:33 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:10.427 08:43:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:10.427 08:43:33 rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:10.427 08:43:33 rpc -- scripts/common.sh@345 -- # : 1
00:07:10.427 08:43:33 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:10.427 08:43:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:10.427 08:43:33 rpc -- scripts/common.sh@365 -- # decimal 1
00:07:10.427 08:43:33 rpc -- scripts/common.sh@353 -- # local d=1
00:07:10.427 08:43:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:10.427 08:43:33 rpc -- scripts/common.sh@355 -- # echo 1
00:07:10.427 08:43:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:10.427 08:43:33 rpc -- scripts/common.sh@366 -- # decimal 2
00:07:10.427 08:43:33 rpc -- scripts/common.sh@353 -- # local d=2
00:07:10.427 08:43:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:10.427 08:43:33 rpc -- scripts/common.sh@355 -- # echo 2
00:07:10.427 08:43:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:10.427 08:43:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:10.427 08:43:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:10.427 08:43:33 rpc -- scripts/common.sh@368 -- # return 0
00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:07:10.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.427 --rc genhtml_branch_coverage=1
00:07:10.427 --rc genhtml_function_coverage=1
00:07:10.427 --rc genhtml_legend=1
00:07:10.427 --rc geninfo_all_blocks=1
00:07:10.427 --rc geninfo_unexecuted_blocks=1
00:07:10.427
00:07:10.427 '
00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:07:10.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.427 --rc genhtml_branch_coverage=1
00:07:10.427 --rc genhtml_function_coverage=1
00:07:10.427 --rc genhtml_legend=1
00:07:10.427 --rc geninfo_all_blocks=1
00:07:10.427 --rc geninfo_unexecuted_blocks=1
00:07:10.427
00:07:10.427 '
00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:07:10.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.427 --rc genhtml_branch_coverage=1
00:07:10.427 --rc genhtml_function_coverage=1
--rc genhtml_legend=1 00:07:10.427 --rc geninfo_all_blocks=1 00:07:10.427 --rc geninfo_unexecuted_blocks=1 00:07:10.427 00:07:10.427 ' 00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:10.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.427 --rc genhtml_branch_coverage=1 00:07:10.427 --rc genhtml_function_coverage=1 00:07:10.427 --rc genhtml_legend=1 00:07:10.427 --rc geninfo_all_blocks=1 00:07:10.427 --rc geninfo_unexecuted_blocks=1 00:07:10.427 00:07:10.427 ' 00:07:10.427 08:43:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=275562 00:07:10.427 08:43:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:10.427 08:43:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.427 08:43:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 275562 00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@831 -- # '[' -z 275562 ']' 00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.427 08:43:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.427 [2024-11-06 08:43:33.372167] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:10.427 [2024-11-06 08:43:33.372218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275562 ] 00:07:10.686 [2024-11-06 08:43:33.443214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.686 [2024-11-06 08:43:33.484741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:10.686 [2024-11-06 08:43:33.484776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 275562' to capture a snapshot of events at runtime. 00:07:10.686 [2024-11-06 08:43:33.484783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.686 [2024-11-06 08:43:33.484789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.686 [2024-11-06 08:43:33.484794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid275562 for offline analysis/debug. 
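The target above was launched with '-e bdev', so only the bdev tracepoint group is enabled (mask 0x8, confirmed by trace_get_info further down), and the NOTICE lines describe two ways to capture it. A sketch of both capture paths, assuming the job's build tree and that pid 275562 is still alive:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk         # assumption: same tree as this job
  sudo "$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p 275562    # live snapshot, exactly the command the NOTICE suggests
  cp /dev/shm/spdk_tgt_trace.pid275562 /tmp/bdev_trace.shm       # or keep the shm file for offline analysis/debug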
00:07:10.686 [2024-11-06 08:43:33.485371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.686 08:43:33 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.686 08:43:33 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.686 08:43:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:07:10.686 08:43:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:07:10.686 08:43:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:10.686 08:43:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:10.686 08:43:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.686 08:43:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.687 08:43:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.946 ************************************ 00:07:10.946 START TEST rpc_integrity 00:07:10.946 ************************************ 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.946 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.946 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:10.946 { 00:07:10.946 "name": "Malloc0", 00:07:10.946 "aliases": [ 00:07:10.946 "256daa1a-f874-4e14-a252-6eb88071362f" 00:07:10.946 ], 00:07:10.946 "product_name": "Malloc disk", 00:07:10.946 "block_size": 512, 00:07:10.946 "num_blocks": 16384, 00:07:10.946 "uuid": "256daa1a-f874-4e14-a252-6eb88071362f", 00:07:10.946 "assigned_rate_limits": { 00:07:10.946 "rw_ios_per_sec": 0, 00:07:10.946 "rw_mbytes_per_sec": 0, 00:07:10.946 "r_mbytes_per_sec": 0, 00:07:10.946 "w_mbytes_per_sec": 0 00:07:10.946 }, 00:07:10.946 "claimed": false, 
00:07:10.946 "zoned": false, 00:07:10.946 "supported_io_types": { 00:07:10.946 "read": true, 00:07:10.946 "write": true, 00:07:10.946 "unmap": true, 00:07:10.946 "flush": true, 00:07:10.946 "reset": true, 00:07:10.946 "nvme_admin": false, 00:07:10.946 "nvme_io": false, 00:07:10.946 "nvme_io_md": false, 00:07:10.946 "write_zeroes": true, 00:07:10.946 "zcopy": true, 00:07:10.946 "get_zone_info": false, 00:07:10.946 "zone_management": false, 00:07:10.946 "zone_append": false, 00:07:10.946 "compare": false, 00:07:10.946 "compare_and_write": false, 00:07:10.946 "abort": true, 00:07:10.946 "seek_hole": false, 00:07:10.947 "seek_data": false, 00:07:10.947 "copy": true, 00:07:10.947 "nvme_iov_md": false 00:07:10.947 }, 00:07:10.947 "memory_domains": [ 00:07:10.947 { 00:07:10.947 "dma_device_id": "system", 00:07:10.947 "dma_device_type": 1 00:07:10.947 }, 00:07:10.947 { 00:07:10.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.947 "dma_device_type": 2 00:07:10.947 } 00:07:10.947 ], 00:07:10.947 "driver_specific": {} 00:07:10.947 } 00:07:10.947 ]' 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 [2024-11-06 08:43:33.853010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:10.947 [2024-11-06 08:43:33.853037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.947 [2024-11-06 08:43:33.853050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1549c30 00:07:10.947 [2024-11-06 08:43:33.853057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.947 [2024-11-06 08:43:33.854106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.947 [2024-11-06 08:43:33.854124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:10.947 Passthru0 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:10.947 { 00:07:10.947 "name": "Malloc0", 00:07:10.947 "aliases": [ 00:07:10.947 "256daa1a-f874-4e14-a252-6eb88071362f" 00:07:10.947 ], 00:07:10.947 "product_name": "Malloc disk", 00:07:10.947 "block_size": 512, 00:07:10.947 "num_blocks": 16384, 00:07:10.947 "uuid": "256daa1a-f874-4e14-a252-6eb88071362f", 00:07:10.947 "assigned_rate_limits": { 00:07:10.947 "rw_ios_per_sec": 0, 00:07:10.947 "rw_mbytes_per_sec": 0, 00:07:10.947 "r_mbytes_per_sec": 0, 00:07:10.947 "w_mbytes_per_sec": 0 00:07:10.947 }, 00:07:10.947 "claimed": true, 00:07:10.947 "claim_type": "exclusive_write", 00:07:10.947 "zoned": false, 00:07:10.947 "supported_io_types": { 00:07:10.947 "read": true, 00:07:10.947 "write": true, 00:07:10.947 "unmap": true, 00:07:10.947 "flush": true, 00:07:10.947 "reset": true, 
00:07:10.947 "nvme_admin": false, 00:07:10.947 "nvme_io": false, 00:07:10.947 "nvme_io_md": false, 00:07:10.947 "write_zeroes": true, 00:07:10.947 "zcopy": true, 00:07:10.947 "get_zone_info": false, 00:07:10.947 "zone_management": false, 00:07:10.947 "zone_append": false, 00:07:10.947 "compare": false, 00:07:10.947 "compare_and_write": false, 00:07:10.947 "abort": true, 00:07:10.947 "seek_hole": false, 00:07:10.947 "seek_data": false, 00:07:10.947 "copy": true, 00:07:10.947 "nvme_iov_md": false 00:07:10.947 }, 00:07:10.947 "memory_domains": [ 00:07:10.947 { 00:07:10.947 "dma_device_id": "system", 00:07:10.947 "dma_device_type": 1 00:07:10.947 }, 00:07:10.947 { 00:07:10.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.947 "dma_device_type": 2 00:07:10.947 } 00:07:10.947 ], 00:07:10.947 "driver_specific": {} 00:07:10.947 }, 00:07:10.947 { 00:07:10.947 "name": "Passthru0", 00:07:10.947 "aliases": [ 00:07:10.947 "7e62a51a-ed9f-522e-a2dd-8a905f86e6d4" 00:07:10.947 ], 00:07:10.947 "product_name": "passthru", 00:07:10.947 "block_size": 512, 00:07:10.947 "num_blocks": 16384, 00:07:10.947 "uuid": "7e62a51a-ed9f-522e-a2dd-8a905f86e6d4", 00:07:10.947 "assigned_rate_limits": { 00:07:10.947 "rw_ios_per_sec": 0, 00:07:10.947 "rw_mbytes_per_sec": 0, 00:07:10.947 "r_mbytes_per_sec": 0, 00:07:10.947 "w_mbytes_per_sec": 0 00:07:10.947 }, 00:07:10.947 "claimed": false, 00:07:10.947 "zoned": false, 00:07:10.947 "supported_io_types": { 00:07:10.947 "read": true, 00:07:10.947 "write": true, 00:07:10.947 "unmap": true, 00:07:10.947 "flush": true, 00:07:10.947 "reset": true, 00:07:10.947 "nvme_admin": false, 00:07:10.947 "nvme_io": false, 00:07:10.947 "nvme_io_md": false, 00:07:10.947 "write_zeroes": true, 00:07:10.947 "zcopy": true, 00:07:10.947 "get_zone_info": false, 00:07:10.947 "zone_management": false, 00:07:10.947 "zone_append": false, 00:07:10.947 "compare": false, 00:07:10.947 "compare_and_write": false, 00:07:10.947 "abort": true, 00:07:10.947 "seek_hole": false, 00:07:10.947 "seek_data": false, 00:07:10.947 "copy": true, 00:07:10.947 "nvme_iov_md": false 00:07:10.947 }, 00:07:10.947 "memory_domains": [ 00:07:10.947 { 00:07:10.947 "dma_device_id": "system", 00:07:10.947 "dma_device_type": 1 00:07:10.947 }, 00:07:10.947 { 00:07:10.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.947 "dma_device_type": 2 00:07:10.947 } 00:07:10.947 ], 00:07:10.947 "driver_specific": { 00:07:10.947 "passthru": { 00:07:10.947 "name": "Passthru0", 00:07:10.947 "base_bdev_name": "Malloc0" 00:07:10.947 } 00:07:10.947 } 00:07:10.947 } 00:07:10.947 ]' 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:10.947 
08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 08:43:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.947 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:11.207 08:43:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:11.207 08:43:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:11.207 00:07:11.207 real 0m0.274s 00:07:11.207 user 0m0.173s 00:07:11.207 sys 0m0.036s 00:07:11.207 08:43:34 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.207 08:43:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 ************************************ 00:07:11.207 END TEST rpc_integrity 00:07:11.207 ************************************ 00:07:11.207 08:43:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:11.207 08:43:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.207 08:43:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.207 08:43:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 ************************************ 00:07:11.207 START TEST rpc_plugins 00:07:11.207 ************************************ 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:11.207 { 00:07:11.207 "name": "Malloc1", 00:07:11.207 "aliases": [ 00:07:11.207 "af3d55cb-5b59-432c-a837-51b78f0dc682" 00:07:11.207 ], 00:07:11.207 "product_name": "Malloc disk", 00:07:11.207 "block_size": 4096, 00:07:11.207 "num_blocks": 256, 00:07:11.207 "uuid": "af3d55cb-5b59-432c-a837-51b78f0dc682", 00:07:11.207 "assigned_rate_limits": { 00:07:11.207 "rw_ios_per_sec": 0, 00:07:11.207 "rw_mbytes_per_sec": 0, 00:07:11.207 "r_mbytes_per_sec": 0, 00:07:11.207 "w_mbytes_per_sec": 0 00:07:11.207 }, 00:07:11.207 "claimed": false, 00:07:11.207 "zoned": false, 00:07:11.207 "supported_io_types": { 00:07:11.207 "read": true, 00:07:11.207 "write": true, 00:07:11.207 "unmap": true, 00:07:11.207 "flush": true, 00:07:11.207 "reset": true, 00:07:11.207 "nvme_admin": false, 00:07:11.207 "nvme_io": false, 00:07:11.207 "nvme_io_md": false, 00:07:11.207 "write_zeroes": true, 00:07:11.207 "zcopy": true, 00:07:11.207 "get_zone_info": false, 00:07:11.207 "zone_management": false, 00:07:11.207 "zone_append": false, 00:07:11.207 "compare": false, 00:07:11.207 "compare_and_write": false, 00:07:11.207 "abort": true, 00:07:11.207 "seek_hole": false, 00:07:11.207 "seek_data": false, 00:07:11.207 "copy": true, 00:07:11.207 "nvme_iov_md": false 00:07:11.207 }, 00:07:11.207 
"memory_domains": [ 00:07:11.207 { 00:07:11.207 "dma_device_id": "system", 00:07:11.207 "dma_device_type": 1 00:07:11.207 }, 00:07:11.207 { 00:07:11.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.207 "dma_device_type": 2 00:07:11.207 } 00:07:11.207 ], 00:07:11.207 "driver_specific": {} 00:07:11.207 } 00:07:11.207 ]' 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:11.207 08:43:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:11.207 00:07:11.207 real 0m0.138s 00:07:11.207 user 0m0.087s 00:07:11.207 sys 0m0.016s 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.207 08:43:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 ************************************ 00:07:11.207 END TEST rpc_plugins 00:07:11.207 ************************************ 00:07:11.466 08:43:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:11.466 08:43:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.466 08:43:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.466 08:43:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.466 ************************************ 00:07:11.466 START TEST rpc_trace_cmd_test 00:07:11.466 ************************************ 00:07:11.466 08:43:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:11.466 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:11.466 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:11.466 08:43:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:11.467 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid275562", 00:07:11.467 "tpoint_group_mask": "0x8", 00:07:11.467 "iscsi_conn": { 00:07:11.467 "mask": "0x2", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "scsi": { 00:07:11.467 "mask": "0x4", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "bdev": { 00:07:11.467 "mask": "0x8", 00:07:11.467 "tpoint_mask": "0xffffffffffffffff" 00:07:11.467 }, 00:07:11.467 "nvmf_rdma": { 00:07:11.467 "mask": "0x10", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "nvmf_tcp": { 00:07:11.467 "mask": "0x20", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 
00:07:11.467 "ftl": { 00:07:11.467 "mask": "0x40", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "blobfs": { 00:07:11.467 "mask": "0x80", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "dsa": { 00:07:11.467 "mask": "0x200", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "thread": { 00:07:11.467 "mask": "0x400", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "nvme_pcie": { 00:07:11.467 "mask": "0x800", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "iaa": { 00:07:11.467 "mask": "0x1000", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "nvme_tcp": { 00:07:11.467 "mask": "0x2000", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "bdev_nvme": { 00:07:11.467 "mask": "0x4000", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "sock": { 00:07:11.467 "mask": "0x8000", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "blob": { 00:07:11.467 "mask": "0x10000", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "bdev_raid": { 00:07:11.467 "mask": "0x20000", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 }, 00:07:11.467 "scheduler": { 00:07:11.467 "mask": "0x40000", 00:07:11.467 "tpoint_mask": "0x0" 00:07:11.467 } 00:07:11.467 }' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:11.467 00:07:11.467 real 0m0.197s 00:07:11.467 user 0m0.167s 00:07:11.467 sys 0m0.022s 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.467 08:43:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.467 ************************************ 00:07:11.467 END TEST rpc_trace_cmd_test 00:07:11.467 ************************************ 00:07:11.726 08:43:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:11.726 08:43:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:11.726 08:43:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:11.726 08:43:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.726 08:43:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.726 08:43:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.726 ************************************ 00:07:11.726 START TEST rpc_daemon_integrity 00:07:11.726 ************************************ 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:11.726 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:11.727 { 00:07:11.727 "name": "Malloc2", 00:07:11.727 "aliases": [ 00:07:11.727 "5aa8490d-9601-4323-a9b2-9e5e7173a2ac" 00:07:11.727 ], 00:07:11.727 "product_name": "Malloc disk", 00:07:11.727 "block_size": 512, 00:07:11.727 "num_blocks": 16384, 00:07:11.727 "uuid": "5aa8490d-9601-4323-a9b2-9e5e7173a2ac", 00:07:11.727 "assigned_rate_limits": { 00:07:11.727 "rw_ios_per_sec": 0, 00:07:11.727 "rw_mbytes_per_sec": 0, 00:07:11.727 "r_mbytes_per_sec": 0, 00:07:11.727 "w_mbytes_per_sec": 0 00:07:11.727 }, 00:07:11.727 "claimed": false, 00:07:11.727 "zoned": false, 00:07:11.727 "supported_io_types": { 00:07:11.727 "read": true, 00:07:11.727 "write": true, 00:07:11.727 "unmap": true, 00:07:11.727 "flush": true, 00:07:11.727 "reset": true, 00:07:11.727 "nvme_admin": false, 00:07:11.727 "nvme_io": false, 00:07:11.727 "nvme_io_md": false, 00:07:11.727 "write_zeroes": true, 00:07:11.727 "zcopy": true, 00:07:11.727 "get_zone_info": false, 00:07:11.727 "zone_management": false, 00:07:11.727 "zone_append": false, 00:07:11.727 "compare": false, 00:07:11.727 "compare_and_write": false, 00:07:11.727 "abort": true, 00:07:11.727 "seek_hole": false, 00:07:11.727 "seek_data": false, 00:07:11.727 "copy": true, 00:07:11.727 "nvme_iov_md": false 00:07:11.727 }, 00:07:11.727 "memory_domains": [ 00:07:11.727 { 00:07:11.727 "dma_device_id": "system", 00:07:11.727 "dma_device_type": 1 00:07:11.727 }, 00:07:11.727 { 00:07:11.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.727 "dma_device_type": 2 00:07:11.727 } 00:07:11.727 ], 00:07:11.727 "driver_specific": {} 00:07:11.727 } 00:07:11.727 ]' 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.727 [2024-11-06 08:43:34.671217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:11.727 [2024-11-06 08:43:34.671245] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.727 [2024-11-06 08:43:34.671259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1676500 00:07:11.727 [2024-11-06 08:43:34.671265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.727 [2024-11-06 08:43:34.672293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.727 [2024-11-06 08:43:34.672312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:11.727 Passthru0 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:11.727 { 00:07:11.727 "name": "Malloc2", 00:07:11.727 "aliases": [ 00:07:11.727 "5aa8490d-9601-4323-a9b2-9e5e7173a2ac" 00:07:11.727 ], 00:07:11.727 "product_name": "Malloc disk", 00:07:11.727 "block_size": 512, 00:07:11.727 "num_blocks": 16384, 00:07:11.727 "uuid": "5aa8490d-9601-4323-a9b2-9e5e7173a2ac", 00:07:11.727 "assigned_rate_limits": { 00:07:11.727 "rw_ios_per_sec": 0, 00:07:11.727 "rw_mbytes_per_sec": 0, 00:07:11.727 "r_mbytes_per_sec": 0, 00:07:11.727 "w_mbytes_per_sec": 0 00:07:11.727 }, 00:07:11.727 "claimed": true, 00:07:11.727 "claim_type": "exclusive_write", 00:07:11.727 "zoned": false, 00:07:11.727 "supported_io_types": { 00:07:11.727 "read": true, 00:07:11.727 "write": true, 00:07:11.727 "unmap": true, 00:07:11.727 "flush": true, 00:07:11.727 "reset": true, 00:07:11.727 "nvme_admin": false, 00:07:11.727 "nvme_io": false, 00:07:11.727 "nvme_io_md": false, 00:07:11.727 "write_zeroes": true, 00:07:11.727 "zcopy": true, 00:07:11.727 "get_zone_info": false, 00:07:11.727 "zone_management": false, 00:07:11.727 "zone_append": false, 00:07:11.727 "compare": false, 00:07:11.727 "compare_and_write": false, 00:07:11.727 "abort": true, 00:07:11.727 "seek_hole": false, 00:07:11.727 "seek_data": false, 00:07:11.727 "copy": true, 00:07:11.727 "nvme_iov_md": false 00:07:11.727 }, 00:07:11.727 "memory_domains": [ 00:07:11.727 { 00:07:11.727 "dma_device_id": "system", 00:07:11.727 "dma_device_type": 1 00:07:11.727 }, 00:07:11.727 { 00:07:11.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.727 "dma_device_type": 2 00:07:11.727 } 00:07:11.727 ], 00:07:11.727 "driver_specific": {} 00:07:11.727 }, 00:07:11.727 { 00:07:11.727 "name": "Passthru0", 00:07:11.727 "aliases": [ 00:07:11.727 "eee58ff3-a8e8-5131-b74f-b98e8100aa70" 00:07:11.727 ], 00:07:11.727 "product_name": "passthru", 00:07:11.727 "block_size": 512, 00:07:11.727 "num_blocks": 16384, 00:07:11.727 "uuid": "eee58ff3-a8e8-5131-b74f-b98e8100aa70", 00:07:11.727 "assigned_rate_limits": { 00:07:11.727 "rw_ios_per_sec": 0, 00:07:11.727 "rw_mbytes_per_sec": 0, 00:07:11.727 "r_mbytes_per_sec": 0, 00:07:11.727 "w_mbytes_per_sec": 0 00:07:11.727 }, 00:07:11.727 "claimed": false, 00:07:11.727 "zoned": false, 00:07:11.727 "supported_io_types": { 00:07:11.727 "read": true, 00:07:11.727 "write": true, 00:07:11.727 "unmap": true, 00:07:11.727 "flush": true, 00:07:11.727 "reset": true, 00:07:11.727 "nvme_admin": false, 
00:07:11.727 "nvme_io": false, 00:07:11.727 "nvme_io_md": false, 00:07:11.727 "write_zeroes": true, 00:07:11.727 "zcopy": true, 00:07:11.727 "get_zone_info": false, 00:07:11.727 "zone_management": false, 00:07:11.727 "zone_append": false, 00:07:11.727 "compare": false, 00:07:11.727 "compare_and_write": false, 00:07:11.727 "abort": true, 00:07:11.727 "seek_hole": false, 00:07:11.727 "seek_data": false, 00:07:11.727 "copy": true, 00:07:11.727 "nvme_iov_md": false 00:07:11.727 }, 00:07:11.727 "memory_domains": [ 00:07:11.727 { 00:07:11.727 "dma_device_id": "system", 00:07:11.727 "dma_device_type": 1 00:07:11.727 }, 00:07:11.727 { 00:07:11.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.727 "dma_device_type": 2 00:07:11.727 } 00:07:11.727 ], 00:07:11.727 "driver_specific": { 00:07:11.727 "passthru": { 00:07:11.727 "name": "Passthru0", 00:07:11.727 "base_bdev_name": "Malloc2" 00:07:11.727 } 00:07:11.727 } 00:07:11.727 } 00:07:11.727 ]' 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.727 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:11.987 00:07:11.987 real 0m0.265s 00:07:11.987 user 0m0.164s 00:07:11.987 sys 0m0.038s 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.987 08:43:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.987 ************************************ 00:07:11.987 END TEST rpc_daemon_integrity 00:07:11.987 ************************************ 00:07:11.987 08:43:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:11.987 08:43:34 rpc -- rpc/rpc.sh@84 -- # killprocess 275562 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@950 -- # '[' -z 275562 ']' 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@954 -- # kill -0 275562 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@955 -- # uname 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275562 00:07:11.987 08:43:34 rpc -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275562' 00:07:11.987 killing process with pid 275562 00:07:11.987 08:43:34 rpc -- common/autotest_common.sh@969 -- # kill 275562 00:07:11.988 08:43:34 rpc -- common/autotest_common.sh@974 -- # wait 275562 00:07:12.247 00:07:12.247 real 0m2.040s 00:07:12.247 user 0m2.582s 00:07:12.247 sys 0m0.674s 00:07:12.247 08:43:35 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.247 08:43:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.247 ************************************ 00:07:12.247 END TEST rpc 00:07:12.247 ************************************ 00:07:12.247 08:43:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:12.247 08:43:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.247 08:43:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.247 08:43:35 -- common/autotest_common.sh@10 -- # set +x 00:07:12.508 ************************************ 00:07:12.508 START TEST skip_rpc 00:07:12.508 ************************************ 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:12.508 * Looking for test storage... 00:07:12.508 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.508 08:43:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.508 --rc genhtml_branch_coverage=1 00:07:12.508 --rc genhtml_function_coverage=1 00:07:12.508 --rc genhtml_legend=1 00:07:12.508 --rc geninfo_all_blocks=1 00:07:12.508 --rc geninfo_unexecuted_blocks=1 00:07:12.508 00:07:12.508 ' 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.508 --rc genhtml_branch_coverage=1 00:07:12.508 --rc genhtml_function_coverage=1 00:07:12.508 --rc genhtml_legend=1 00:07:12.508 --rc geninfo_all_blocks=1 00:07:12.508 --rc geninfo_unexecuted_blocks=1 00:07:12.508 00:07:12.508 ' 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.508 --rc genhtml_branch_coverage=1 00:07:12.508 --rc genhtml_function_coverage=1 00:07:12.508 --rc genhtml_legend=1 00:07:12.508 --rc geninfo_all_blocks=1 00:07:12.508 --rc geninfo_unexecuted_blocks=1 00:07:12.508 00:07:12.508 ' 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:12.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.508 --rc genhtml_branch_coverage=1 00:07:12.508 --rc genhtml_function_coverage=1 00:07:12.508 --rc genhtml_legend=1 00:07:12.508 --rc geninfo_all_blocks=1 00:07:12.508 --rc geninfo_unexecuted_blocks=1 00:07:12.508 00:07:12.508 ' 00:07:12.508 08:43:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:12.508 08:43:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:12.508 08:43:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.508 08:43:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.508 ************************************ 00:07:12.508 START TEST skip_rpc 00:07:12.508 ************************************ 00:07:12.508 08:43:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:12.508 08:43:35 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=276197 00:07:12.508 08:43:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.508 08:43:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:12.508 08:43:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:12.768 [2024-11-06 08:43:35.522422] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:12.768 [2024-11-06 08:43:35.522458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276197 ] 00:07:12.768 [2024-11-06 08:43:35.596173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.768 [2024-11-06 08:43:35.639953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 276197 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 276197 ']' 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 276197 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276197 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276197' 00:07:18.044 killing process with pid 276197 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- 
common/autotest_common.sh@969 -- # kill 276197 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 276197 00:07:18.044 00:07:18.044 real 0m5.368s 00:07:18.044 user 0m5.129s 00:07:18.044 sys 0m0.277s 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.044 08:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.044 ************************************ 00:07:18.044 END TEST skip_rpc 00:07:18.044 ************************************ 00:07:18.044 08:43:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:18.044 08:43:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.045 08:43:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.045 08:43:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.045 ************************************ 00:07:18.045 START TEST skip_rpc_with_json 00:07:18.045 ************************************ 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=277150 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 277150 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 277150 ']' 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.045 08:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.045 [2024-11-06 08:43:40.957499] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
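The skip_rpc test that just finished (END TEST skip_rpc above) started its target with --no-rpc-server, so /var/tmp/spdk.sock never appears and the NOT-wrapped rpc_cmd spdk_get_version exercised the failure path instead of hanging. A sketch of that negative check, assuming the job's tree:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &   # same flags as pid 276197 in the log
  TGT=$!
  sleep 5                                                   # the test also sleeps 5 s before probing
  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then      # must fail: no RPC listener exists
      echo "unexpected: RPC succeeded without an RPC server" >&2
  fi
  kill "$TGT"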
00:07:18.045 [2024-11-06 08:43:40.957538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277150 ] 00:07:18.045 [2024-11-06 08:43:41.032136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.304 [2024-11-06 08:43:41.075285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.304 [2024-11-06 08:43:41.289820] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:18.304 request: 00:07:18.304 { 00:07:18.304 "trtype": "tcp", 00:07:18.304 "method": "nvmf_get_transports", 00:07:18.304 "req_id": 1 00:07:18.304 } 00:07:18.304 Got JSON-RPC error response 00:07:18.304 response: 00:07:18.304 { 00:07:18.304 "code": -19, 00:07:18.304 "message": "No such device" 00:07:18.304 } 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.304 [2024-11-06 08:43:41.301940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.304 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.564 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.564 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:18.564 { 00:07:18.564 "subsystems": [ 00:07:18.564 { 00:07:18.564 "subsystem": "fsdev", 00:07:18.564 "config": [ 00:07:18.564 { 00:07:18.564 "method": "fsdev_set_opts", 00:07:18.564 "params": { 00:07:18.564 "fsdev_io_pool_size": 65535, 00:07:18.564 "fsdev_io_cache_size": 256 00:07:18.564 } 00:07:18.564 } 00:07:18.564 ] 00:07:18.564 }, 00:07:18.564 { 00:07:18.564 "subsystem": "keyring", 00:07:18.564 "config": [] 00:07:18.564 }, 00:07:18.564 { 00:07:18.564 "subsystem": "iobuf", 00:07:18.564 "config": [ 00:07:18.564 { 00:07:18.564 "method": "iobuf_set_options", 00:07:18.564 "params": { 00:07:18.564 "small_pool_count": 8192, 00:07:18.565 "large_pool_count": 1024, 00:07:18.565 "small_bufsize": 8192, 00:07:18.565 "large_bufsize": 135168, 00:07:18.565 "enable_numa": false 00:07:18.565 } 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "sock", 00:07:18.565 "config": [ 00:07:18.565 { 
00:07:18.565 "method": "sock_set_default_impl", 00:07:18.565 "params": { 00:07:18.565 "impl_name": "posix" 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "sock_impl_set_options", 00:07:18.565 "params": { 00:07:18.565 "impl_name": "ssl", 00:07:18.565 "recv_buf_size": 4096, 00:07:18.565 "send_buf_size": 4096, 00:07:18.565 "enable_recv_pipe": true, 00:07:18.565 "enable_quickack": false, 00:07:18.565 "enable_placement_id": 0, 00:07:18.565 "enable_zerocopy_send_server": true, 00:07:18.565 "enable_zerocopy_send_client": false, 00:07:18.565 "zerocopy_threshold": 0, 00:07:18.565 "tls_version": 0, 00:07:18.565 "enable_ktls": false 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "sock_impl_set_options", 00:07:18.565 "params": { 00:07:18.565 "impl_name": "posix", 00:07:18.565 "recv_buf_size": 2097152, 00:07:18.565 "send_buf_size": 2097152, 00:07:18.565 "enable_recv_pipe": true, 00:07:18.565 "enable_quickack": false, 00:07:18.565 "enable_placement_id": 0, 00:07:18.565 "enable_zerocopy_send_server": true, 00:07:18.565 "enable_zerocopy_send_client": false, 00:07:18.565 "zerocopy_threshold": 0, 00:07:18.565 "tls_version": 0, 00:07:18.565 "enable_ktls": false 00:07:18.565 } 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "vmd", 00:07:18.565 "config": [] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "accel", 00:07:18.565 "config": [ 00:07:18.565 { 00:07:18.565 "method": "accel_set_options", 00:07:18.565 "params": { 00:07:18.565 "small_cache_size": 128, 00:07:18.565 "large_cache_size": 16, 00:07:18.565 "task_count": 2048, 00:07:18.565 "sequence_count": 2048, 00:07:18.565 "buf_count": 2048 00:07:18.565 } 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "bdev", 00:07:18.565 "config": [ 00:07:18.565 { 00:07:18.565 "method": "bdev_set_options", 00:07:18.565 "params": { 00:07:18.565 "bdev_io_pool_size": 65535, 00:07:18.565 "bdev_io_cache_size": 256, 00:07:18.565 "bdev_auto_examine": true, 00:07:18.565 "iobuf_small_cache_size": 128, 00:07:18.565 "iobuf_large_cache_size": 16 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "bdev_raid_set_options", 00:07:18.565 "params": { 00:07:18.565 "process_window_size_kb": 1024, 00:07:18.565 "process_max_bandwidth_mb_sec": 0 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "bdev_iscsi_set_options", 00:07:18.565 "params": { 00:07:18.565 "timeout_sec": 30 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "bdev_nvme_set_options", 00:07:18.565 "params": { 00:07:18.565 "action_on_timeout": "none", 00:07:18.565 "timeout_us": 0, 00:07:18.565 "timeout_admin_us": 0, 00:07:18.565 "keep_alive_timeout_ms": 10000, 00:07:18.565 "arbitration_burst": 0, 00:07:18.565 "low_priority_weight": 0, 00:07:18.565 "medium_priority_weight": 0, 00:07:18.565 "high_priority_weight": 0, 00:07:18.565 "nvme_adminq_poll_period_us": 10000, 00:07:18.565 "nvme_ioq_poll_period_us": 0, 00:07:18.565 "io_queue_requests": 0, 00:07:18.565 "delay_cmd_submit": true, 00:07:18.565 "transport_retry_count": 4, 00:07:18.565 "bdev_retry_count": 3, 00:07:18.565 "transport_ack_timeout": 0, 00:07:18.565 "ctrlr_loss_timeout_sec": 0, 00:07:18.565 "reconnect_delay_sec": 0, 00:07:18.565 "fast_io_fail_timeout_sec": 0, 00:07:18.565 "disable_auto_failback": false, 00:07:18.565 "generate_uuids": false, 00:07:18.565 "transport_tos": 0, 00:07:18.565 "nvme_error_stat": false, 00:07:18.565 "rdma_srq_size": 0, 00:07:18.565 "io_path_stat": false, 
00:07:18.565 "allow_accel_sequence": false, 00:07:18.565 "rdma_max_cq_size": 0, 00:07:18.565 "rdma_cm_event_timeout_ms": 0, 00:07:18.565 "dhchap_digests": [ 00:07:18.565 "sha256", 00:07:18.565 "sha384", 00:07:18.565 "sha512" 00:07:18.565 ], 00:07:18.565 "dhchap_dhgroups": [ 00:07:18.565 "null", 00:07:18.565 "ffdhe2048", 00:07:18.565 "ffdhe3072", 00:07:18.565 "ffdhe4096", 00:07:18.565 "ffdhe6144", 00:07:18.565 "ffdhe8192" 00:07:18.565 ] 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "bdev_nvme_set_hotplug", 00:07:18.565 "params": { 00:07:18.565 "period_us": 100000, 00:07:18.565 "enable": false 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "bdev_wait_for_examine" 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "scsi", 00:07:18.565 "config": null 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "scheduler", 00:07:18.565 "config": [ 00:07:18.565 { 00:07:18.565 "method": "framework_set_scheduler", 00:07:18.565 "params": { 00:07:18.565 "name": "static" 00:07:18.565 } 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "vhost_scsi", 00:07:18.565 "config": [] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "vhost_blk", 00:07:18.565 "config": [] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "ublk", 00:07:18.565 "config": [] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "nbd", 00:07:18.565 "config": [] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "nvmf", 00:07:18.565 "config": [ 00:07:18.565 { 00:07:18.565 "method": "nvmf_set_config", 00:07:18.565 "params": { 00:07:18.565 "discovery_filter": "match_any", 00:07:18.565 "admin_cmd_passthru": { 00:07:18.565 "identify_ctrlr": false 00:07:18.565 }, 00:07:18.565 "dhchap_digests": [ 00:07:18.565 "sha256", 00:07:18.565 "sha384", 00:07:18.565 "sha512" 00:07:18.565 ], 00:07:18.565 "dhchap_dhgroups": [ 00:07:18.565 "null", 00:07:18.565 "ffdhe2048", 00:07:18.565 "ffdhe3072", 00:07:18.565 "ffdhe4096", 00:07:18.565 "ffdhe6144", 00:07:18.565 "ffdhe8192" 00:07:18.565 ] 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "nvmf_set_max_subsystems", 00:07:18.565 "params": { 00:07:18.565 "max_subsystems": 1024 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "nvmf_set_crdt", 00:07:18.565 "params": { 00:07:18.565 "crdt1": 0, 00:07:18.565 "crdt2": 0, 00:07:18.565 "crdt3": 0 00:07:18.565 } 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "method": "nvmf_create_transport", 00:07:18.565 "params": { 00:07:18.565 "trtype": "TCP", 00:07:18.565 "max_queue_depth": 128, 00:07:18.565 "max_io_qpairs_per_ctrlr": 127, 00:07:18.565 "in_capsule_data_size": 4096, 00:07:18.565 "max_io_size": 131072, 00:07:18.565 "io_unit_size": 131072, 00:07:18.565 "max_aq_depth": 128, 00:07:18.565 "num_shared_buffers": 511, 00:07:18.565 "buf_cache_size": 4294967295, 00:07:18.565 "dif_insert_or_strip": false, 00:07:18.565 "zcopy": false, 00:07:18.565 "c2h_success": true, 00:07:18.565 "sock_priority": 0, 00:07:18.565 "abort_timeout_sec": 1, 00:07:18.565 "ack_timeout": 0, 00:07:18.565 "data_wr_pool_size": 0 00:07:18.565 } 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "subsystem": "iscsi", 00:07:18.565 "config": [ 00:07:18.565 { 00:07:18.565 "method": "iscsi_set_options", 00:07:18.565 "params": { 00:07:18.565 "node_base": "iqn.2016-06.io.spdk", 00:07:18.565 "max_sessions": 128, 00:07:18.565 "max_connections_per_session": 2, 00:07:18.565 "max_queue_depth": 64, 00:07:18.565 
"default_time2wait": 2, 00:07:18.565 "default_time2retain": 20, 00:07:18.565 "first_burst_length": 8192, 00:07:18.565 "immediate_data": true, 00:07:18.565 "allow_duplicated_isid": false, 00:07:18.565 "error_recovery_level": 0, 00:07:18.565 "nop_timeout": 60, 00:07:18.565 "nop_in_interval": 30, 00:07:18.565 "disable_chap": false, 00:07:18.565 "require_chap": false, 00:07:18.565 "mutual_chap": false, 00:07:18.565 "chap_group": 0, 00:07:18.565 "max_large_datain_per_connection": 64, 00:07:18.565 "max_r2t_per_connection": 4, 00:07:18.565 "pdu_pool_size": 36864, 00:07:18.565 "immediate_data_pool_size": 16384, 00:07:18.565 "data_out_pool_size": 2048 00:07:18.565 } 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 } 00:07:18.565 ] 00:07:18.565 } 00:07:18.565 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:18.565 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 277150 00:07:18.565 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 277150 ']' 00:07:18.565 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 277150 00:07:18.565 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:18.566 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.566 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277150 00:07:18.566 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.566 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.566 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277150' 00:07:18.566 killing process with pid 277150 00:07:18.566 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 277150 00:07:18.566 08:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 277150 00:07:18.826 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=277252 00:07:18.826 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:18.826 08:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 277252 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 277252 ']' 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 277252 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277252 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277252' 00:07:24.105 killing process with pid 277252 00:07:24.105 08:43:46 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 277252 00:07:24.105 08:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 277252 00:07:24.364 08:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:24.364 08:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:24.364 00:07:24.364 real 0m6.283s 00:07:24.364 user 0m5.970s 00:07:24.364 sys 0m0.609s 00:07:24.364 08:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.364 08:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.364 ************************************ 00:07:24.364 END TEST skip_rpc_with_json 00:07:24.364 ************************************ 00:07:24.364 08:43:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:24.364 08:43:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.364 08:43:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.364 08:43:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.364 ************************************ 00:07:24.364 START TEST skip_rpc_with_delay 00:07:24.365 ************************************ 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:24.365 [2024-11-06 08:43:47.312046] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
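The *ERROR* record above is the pass condition for skip_rpc_with_delay: spdk_tgt must refuse --wait-for-rpc once --no-rpc-server has removed the RPC listener it would wait on, and the NOT wrapper traced above asserts the non-zero exit. A minimal by-hand sketch of the same check (same binary path as this run; outside the harness the NOT helper reduces to an exit-status test):

    # Expected to fail: --wait-for-rpc has nothing to wait on when
    # --no-rpc-server disables the RPC listener.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
        --no-rpc-server -m 0x1 --wait-for-rpc
    (( $? != 0 )) && echo "expected failure observed (es=1 in the trace below)"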
00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.365 00:07:24.365 real 0m0.065s 00:07:24.365 user 0m0.036s 00:07:24.365 sys 0m0.028s 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.365 08:43:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:24.365 ************************************ 00:07:24.365 END TEST skip_rpc_with_delay 00:07:24.365 ************************************ 00:07:24.365 08:43:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:24.365 08:43:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:24.365 08:43:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:24.365 08:43:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.365 08:43:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.365 08:43:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.624 ************************************ 00:07:24.624 START TEST exit_on_failed_rpc_init 00:07:24.624 ************************************ 00:07:24.624 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:24.624 08:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=278272 00:07:24.624 08:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 278272 00:07:24.624 08:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.625 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 278272 ']' 00:07:24.625 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.625 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.625 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.625 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.625 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:24.625 [2024-11-06 08:43:47.444349] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:07:24.625 [2024-11-06 08:43:47.444391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278272 ] 00:07:24.625 [2024-11-06 08:43:47.519929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.625 [2024-11-06 08:43:47.563754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.884 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.884 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:24.884 08:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:24.884 08:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.884 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:24.884 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:24.885 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.885 [2024-11-06 08:43:47.831414] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:24.885 [2024-11-06 08:43:47.831458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278366 ] 00:07:25.144 [2024-11-06 08:43:47.906035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.144 [2024-11-06 08:43:47.946379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.144 [2024-11-06 08:43:47.946430] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
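This rpc.c error, together with the "Unable to start RPC service" and "spdk_app_stop'd on non-zero" records that follow, is exactly the scenario exit_on_failed_rpc_init provokes: a second spdk_tgt is launched while pid 278272 still owns the default RPC socket. A hand-run sketch of the same collision, assuming the first target is already listening on /var/tmp/spdk.sock:

    # First instance owns the default RPC socket /var/tmp/spdk.sock.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
    sleep 1   # crude wait; the harness uses waitforlisten instead
    # Second instance: different core mask, same socket path, so
    # rpc_listen fails and the app exits non-zero.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2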
00:07:25.144 [2024-11-06 08:43:47.946439] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:25.144 [2024-11-06 08:43:47.946445] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 278272 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 278272 ']' 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 278272 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.144 08:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 278272 00:07:25.144 08:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.145 08:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.145 08:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 278272' 00:07:25.145 killing process with pid 278272 00:07:25.145 08:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 278272 00:07:25.145 08:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 278272 00:07:25.404 00:07:25.404 real 0m0.949s 00:07:25.404 user 0m1.003s 00:07:25.404 sys 0m0.392s 00:07:25.404 08:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.404 08:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:25.404 ************************************ 00:07:25.404 END TEST exit_on_failed_rpc_init 00:07:25.404 ************************************ 00:07:25.404 08:43:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:25.404 00:07:25.404 real 0m13.117s 00:07:25.404 user 0m12.358s 00:07:25.404 sys 0m1.569s 00:07:25.404 08:43:48 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.404 08:43:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.404 ************************************ 00:07:25.404 END TEST skip_rpc 00:07:25.404 ************************************ 00:07:25.404 08:43:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:25.404 08:43:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.404 08:43:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.404 08:43:48 -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.663 ************************************ 00:07:25.664 START TEST rpc_client 00:07:25.664 ************************************ 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:25.664 * Looking for test storage... 00:07:25.664 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.664 08:43:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:25.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.664 --rc genhtml_branch_coverage=1 00:07:25.664 --rc genhtml_function_coverage=1 00:07:25.664 --rc genhtml_legend=1 00:07:25.664 --rc geninfo_all_blocks=1 00:07:25.664 --rc geninfo_unexecuted_blocks=1 00:07:25.664 00:07:25.664 ' 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:25.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.664 --rc genhtml_branch_coverage=1 00:07:25.664 --rc genhtml_function_coverage=1 00:07:25.664 --rc genhtml_legend=1 00:07:25.664 --rc geninfo_all_blocks=1 00:07:25.664 --rc geninfo_unexecuted_blocks=1 00:07:25.664 00:07:25.664 ' 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:25.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.664 --rc genhtml_branch_coverage=1 00:07:25.664 --rc genhtml_function_coverage=1 00:07:25.664 --rc genhtml_legend=1 00:07:25.664 --rc geninfo_all_blocks=1 00:07:25.664 --rc geninfo_unexecuted_blocks=1 00:07:25.664 00:07:25.664 ' 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:25.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.664 --rc genhtml_branch_coverage=1 00:07:25.664 --rc genhtml_function_coverage=1 00:07:25.664 --rc genhtml_legend=1 00:07:25.664 --rc geninfo_all_blocks=1 00:07:25.664 --rc geninfo_unexecuted_blocks=1 00:07:25.664 00:07:25.664 ' 00:07:25.664 08:43:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:25.664 OK 00:07:25.664 08:43:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:25.664 00:07:25.664 real 0m0.200s 00:07:25.664 user 0m0.130s 00:07:25.664 sys 0m0.083s 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.664 08:43:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:25.664 ************************************ 00:07:25.664 END TEST rpc_client 00:07:25.664 ************************************ 00:07:25.935 08:43:48 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:25.935 
08:43:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.935 08:43:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.935 08:43:48 -- common/autotest_common.sh@10 -- # set +x 00:07:25.935 ************************************ 00:07:25.935 START TEST json_config 00:07:25.935 ************************************ 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.935 08:43:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.935 08:43:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.935 08:43:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.935 08:43:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.935 08:43:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.935 08:43:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.935 08:43:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.935 08:43:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:25.935 08:43:48 json_config -- scripts/common.sh@345 -- # : 1 00:07:25.935 08:43:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.935 08:43:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.935 08:43:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:25.935 08:43:48 json_config -- scripts/common.sh@353 -- # local d=1 00:07:25.935 08:43:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.935 08:43:48 json_config -- scripts/common.sh@355 -- # echo 1 00:07:25.935 08:43:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.935 08:43:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@353 -- # local d=2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.935 08:43:48 json_config -- scripts/common.sh@355 -- # echo 2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.935 08:43:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.935 08:43:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.935 08:43:48 json_config -- scripts/common.sh@368 -- # return 0 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:25.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.935 --rc genhtml_branch_coverage=1 00:07:25.935 --rc genhtml_function_coverage=1 00:07:25.935 --rc genhtml_legend=1 00:07:25.935 --rc geninfo_all_blocks=1 00:07:25.935 --rc geninfo_unexecuted_blocks=1 00:07:25.935 00:07:25.935 ' 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:25.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.935 --rc genhtml_branch_coverage=1 00:07:25.935 --rc genhtml_function_coverage=1 00:07:25.935 --rc genhtml_legend=1 00:07:25.935 --rc geninfo_all_blocks=1 00:07:25.935 --rc geninfo_unexecuted_blocks=1 00:07:25.935 00:07:25.935 ' 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:25.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.935 --rc genhtml_branch_coverage=1 00:07:25.935 --rc genhtml_function_coverage=1 00:07:25.935 --rc genhtml_legend=1 00:07:25.935 --rc geninfo_all_blocks=1 00:07:25.935 --rc geninfo_unexecuted_blocks=1 00:07:25.935 00:07:25.935 ' 00:07:25.935 08:43:48 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:25.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.935 --rc genhtml_branch_coverage=1 00:07:25.935 --rc genhtml_function_coverage=1 00:07:25.935 --rc genhtml_legend=1 00:07:25.935 --rc geninfo_all_blocks=1 00:07:25.935 --rc geninfo_unexecuted_blocks=1 00:07:25.935 00:07:25.935 ' 00:07:25.935 08:43:48 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
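For reference, the nvmf defaults the trace has established to this point, collected in one place (values copied from the xtrace lines above); the IP prefix plus least address are what later yield 192.168.100.8/24 on mlx_0_0 and 192.168.100.9/24 on mlx_0_1 in the allocate_nic_ips trace further down:

    # test/nvmf/common.sh defaults as traced in this run
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1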
00:07:25.935 08:43:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:25.935 08:43:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.935 08:43:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.935 08:43:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.935 08:43:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.935 08:43:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.935 08:43:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.935 08:43:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.935 08:43:48 json_config -- paths/export.sh@5 -- # export PATH 00:07:25.935 08:43:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@51 -- # : 0 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.935 08:43:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.936 
08:43:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.936 08:43:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.936 08:43:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.936 08:43:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.936 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.936 08:43:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.936 08:43:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.936 08:43:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:25.936 INFO: JSON configuration test init 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.936 08:43:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:25.936 08:43:48 json_config -- json_config/common.sh@9 -- # 
local app=target 00:07:25.936 08:43:48 json_config -- json_config/common.sh@10 -- # shift 00:07:25.936 08:43:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:25.936 08:43:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:25.936 08:43:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:25.936 08:43:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:25.936 08:43:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:25.936 08:43:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=278714 00:07:25.936 08:43:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:25.936 Waiting for target to run... 00:07:25.936 08:43:48 json_config -- json_config/common.sh@25 -- # waitforlisten 278714 /var/tmp/spdk_tgt.sock 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@831 -- # '[' -z 278714 ']' 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.936 08:43:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:25.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.936 08:43:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.209 [2024-11-06 08:43:48.977223] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
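With the target up and listening on /var/tmp/spdk_tgt.sock, the next step in the trace generates an NVMe bdev/subsystem config and loads it over JSON-RPC. Replayed by hand it amounts to the following (rpc.py load_config reads the JSON from stdin; the exact plumbing inside json_config.sh may differ):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    # gen_nvme.sh emits a JSON config describing local NVMe devices;
    # load_config applies it to the running target.
    scripts/gen_nvme.sh --json-with-subsystems | \
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config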
00:07:26.209 [2024-11-06 08:43:48.977280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278714 ] 00:07:26.476 [2024-11-06 08:43:49.433650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.810 [2024-11-06 08:43:49.491444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.810 08:43:49 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.810 08:43:49 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:26.810 08:43:49 json_config -- json_config/common.sh@26 -- # echo '' 00:07:26.810 00:07:26.810 08:43:49 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:26.810 08:43:49 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:26.810 08:43:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.810 08:43:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.810 08:43:49 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:26.810 08:43:49 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:27.098 08:43:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.098 08:43:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.098 08:43:49 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:27.098 08:43:49 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:27.098 08:43:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:30.526 08:43:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.526 08:43:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:30.526 08:43:52 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:30.526 08:43:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@54 -- # 
echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@54 -- # sort 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:30.526 08:43:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.526 08:43:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:30.526 08:43:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.526 08:43:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:07:30.526 08:43:53 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.526 08:43:53 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:07:30.526 08:43:53 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@440 -- # [[ phy-fallback != virt ]] 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:30.526 08:43:53 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.526 08:43:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.802 
08:43:58 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@320 -- # e810=() 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@321 -- # x722=() 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@322 -- # mlx=() 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:35.802 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:35.802 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:35.802 08:43:58 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:35.802 Found net devices under 0000:da:00.0: mlx_0_0 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:35.802 Found net devices under 0000:da:00.1: mlx_0_1 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@440 -- # is_hw=yes 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@446 -- # rdma_device_init 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@62 -- # uname 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:35.802 08:43:58 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:35.803 08:43:58 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@78 -- # ip= 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_0 up 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:36.063 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.063 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:07:36.063 altname enp218s0f0np0 00:07:36.063 altname ens818f0np0 00:07:36.063 inet 192.168.100.8/24 scope global mlx_0_0 00:07:36.063 valid_lft forever preferred_lft forever 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@78 -- # ip= 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@80 -- # ip addr add 
192.168.100.9/24 dev mlx_0_1 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_1 up 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:07:36.063 08:43:58 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:36.064 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.064 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:07:36.064 altname enp218s0f1np1 00:07:36.064 altname ens818f1np1 00:07:36.064 inet 192.168.100.9/24 scope global mlx_0_1 00:07:36.064 valid_lft forever preferred_lft forever 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@448 -- # return 0 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.064 
08:43:58 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:36.064 192.168.100.9' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:36.064 192.168.100.9' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@483 -- # head -n 1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:36.064 192.168.100.9' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@484 -- # tail -n +2 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@484 -- # head -n 1 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:36.064 08:43:58 json_config -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:36.064 08:43:58 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:07:36.064 08:43:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:36.064 08:43:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:36.324 MallocForNvmf0 00:07:36.324 08:43:59 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:36.324 08:43:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:36.324 MallocForNvmf1 00:07:36.583 08:43:59 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:07:36.583 08:43:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:07:36.583 [2024-11-06 08:43:59.505910] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:36.583 [2024-11-06 08:43:59.567943] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c556e0/0x1b29ec0) succeed. 00:07:36.583 [2024-11-06 08:43:59.579098] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c558b0/0x1ba9b80) succeed. 
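The head/tail juggling traced above is how nvmf/common.sh splits the newline-separated RDMA_IP_LIST into the two target addresses. A standalone sketch of that selection, using the same variable names and the two addresses this run assigned:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    # the first line becomes the primary target address
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    # tail -n +2 drops the first line; head -n 1 then takes the second address
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9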
00:07:36.846 08:43:59 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:36.846 08:43:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:36.846 08:43:59 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:36.846 08:43:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:37.108 08:44:00 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:37.108 08:44:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:37.368 08:44:00 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:37.368 08:44:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:37.628 [2024-11-06 08:44:00.400700] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:37.628 08:44:00 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:37.628 08:44:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.628 08:44:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.628 08:44:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:37.628 08:44:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.628 08:44:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.628 08:44:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:37.628 08:44:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:37.628 08:44:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:37.887 MallocBdevForConfigChangeCheck 00:07:37.887 08:44:00 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:37.887 08:44:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.887 08:44:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.887 08:44:00 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:37.887 08:44:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:38.147 08:44:01 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:38.147 INFO: shutting down applications... 
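Condensed from the tgt_rpc calls traced above, this is the complete RPC sequence that builds the NVMe-oF RDMA target configuration; a sketch that assumes spdk_tgt is already up and listening on /var/tmp/spdk_tgt.sock:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB, 512 B blocks
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB, 1 KiB blocks
    $rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Per the rdma.c warning earlier in the trace, the -c 0 request is overridden to the 256-byte minimum in-capsule data size required to support msdbd=16.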
00:07:38.147 08:44:01 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:38.147 08:44:01 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:38.147 08:44:01 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:38.147 08:44:01 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:40.684 Calling clear_iscsi_subsystem 00:07:40.684 Calling clear_nvmf_subsystem 00:07:40.684 Calling clear_nbd_subsystem 00:07:40.684 Calling clear_ublk_subsystem 00:07:40.684 Calling clear_vhost_blk_subsystem 00:07:40.684 Calling clear_vhost_scsi_subsystem 00:07:40.684 Calling clear_bdev_subsystem 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@352 -- # break 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:40.684 08:44:03 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:40.684 08:44:03 json_config -- json_config/common.sh@31 -- # local app=target 00:07:40.684 08:44:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:40.684 08:44:03 json_config -- json_config/common.sh@35 -- # [[ -n 278714 ]] 00:07:40.684 08:44:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 278714 00:07:40.684 08:44:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:40.684 08:44:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.684 08:44:03 json_config -- json_config/common.sh@41 -- # kill -0 278714 00:07:40.684 08:44:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.254 08:44:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:41.254 08:44:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.254 08:44:04 json_config -- json_config/common.sh@41 -- # kill -0 278714 00:07:41.254 08:44:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:41.254 08:44:04 json_config -- json_config/common.sh@43 -- # break 00:07:41.254 08:44:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:41.254 08:44:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:41.254 SPDK target shutdown done 00:07:41.254 08:44:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:41.254 INFO: relaunching applications... 
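The shutdown ending in 'SPDK target shutdown done' above follows the generic json_config/common.sh pattern: send SIGINT, then poll with kill -0 for up to 30 half-second intervals. A minimal standalone sketch (the pid is the one from this run; any spdk_tgt pid works):

    app_pid=278714
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it only tests whether the pid still exists
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'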
00:07:41.254 08:44:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.254 08:44:04 json_config -- json_config/common.sh@9 -- # local app=target 00:07:41.254 08:44:04 json_config -- json_config/common.sh@10 -- # shift 00:07:41.254 08:44:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:41.254 08:44:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:41.254 08:44:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:41.254 08:44:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:41.254 08:44:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:41.254 08:44:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=283471 00:07:41.254 08:44:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:41.254 Waiting for target to run... 00:07:41.254 08:44:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.254 08:44:04 json_config -- json_config/common.sh@25 -- # waitforlisten 283471 /var/tmp/spdk_tgt.sock 00:07:41.254 08:44:04 json_config -- common/autotest_common.sh@831 -- # '[' -z 283471 ']' 00:07:41.254 08:44:04 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:41.254 08:44:04 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.254 08:44:04 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:41.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:41.254 08:44:04 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.254 08:44:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:41.254 [2024-11-06 08:44:04.069604] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:41.254 [2024-11-06 08:44:04.069667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283471 ] 00:07:41.823 [2024-11-06 08:44:04.534798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.823 [2024-11-06 08:44:04.592794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.115 [2024-11-06 08:44:07.648773] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfb9970/0xfc59f0) succeed. 00:07:45.115 [2024-11-06 08:44:07.659947] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfbbb60/0x1045680) succeed. 
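The relaunch above restarts spdk_tgt from the JSON dumped by save_config and blocks in waitforlisten until the RPC socket answers (max_retries=100 in the trace). A simplified stand-in for that wait, assuming rpc_get_methods as the liveness probe:

    spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
    app_pid=$!
    # unlike the real waitforlisten, this loop does not cap its retries
    until "$rpc" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "$app_pid is listening on /var/tmp/spdk_tgt.sock"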
00:07:45.115 [2024-11-06 08:44:07.710085] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:45.375 08:44:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.375 08:44:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:45.375 08:44:08 json_config -- json_config/common.sh@26 -- # echo '' 00:07:45.375 00:07:45.375 08:44:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:45.375 08:44:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:45.375 INFO: Checking if target configuration is the same... 00:07:45.375 08:44:08 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:45.375 08:44:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:45.375 08:44:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:45.375 + '[' 2 -ne 2 ']' 00:07:45.375 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:45.375 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:45.375 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:45.375 +++ basename /dev/fd/62 00:07:45.375 ++ mktemp /tmp/62.XXX 00:07:45.375 + tmp_file_1=/tmp/62.udW 00:07:45.375 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:45.375 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:45.375 + tmp_file_2=/tmp/spdk_tgt_config.json.cfX 00:07:45.375 + ret=0 00:07:45.375 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:45.943 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:45.944 + diff -u /tmp/62.udW /tmp/spdk_tgt_config.json.cfX 00:07:45.944 + echo 'INFO: JSON config files are the same' 00:07:45.944 INFO: JSON config files are the same 00:07:45.944 + rm /tmp/62.udW /tmp/spdk_tgt_config.json.cfX 00:07:45.944 + exit 0 00:07:45.944 08:44:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:45.944 08:44:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:45.944 INFO: changing configuration and checking if this can be detected... 
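The '+' trace above is json_diff.sh normalizing both configs with config_filter.py -method sort before diffing, so JSON key order cannot cause a false mismatch. A sketch of the same comparison, assuming config_filter.py reads JSON on stdin as the pipeline implies:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
    tgt_config=/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$tmp_file_1"
    "$filter" -method sort < "$tgt_config" > "$tmp_file_2"
    diff -u "$tmp_file_1" "$tmp_file_2" && echo 'INFO: JSON config files are the same'
    rm "$tmp_file_1" "$tmp_file_2"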
00:07:45.944 08:44:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:45.944 08:44:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:45.944 08:44:08 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:45.944 08:44:08 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:45.944 08:44:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:45.944 + '[' 2 -ne 2 ']' 00:07:45.944 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:45.944 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:45.944 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:45.944 +++ basename /dev/fd/62 00:07:45.944 ++ mktemp /tmp/62.XXX 00:07:45.944 + tmp_file_1=/tmp/62.XnL 00:07:45.944 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:45.944 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:45.944 + tmp_file_2=/tmp/spdk_tgt_config.json.gZB 00:07:45.944 + ret=0 00:07:45.944 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:46.514 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:46.514 + diff -u /tmp/62.XnL /tmp/spdk_tgt_config.json.gZB 00:07:46.514 + ret=1 00:07:46.514 + echo '=== Start of file: /tmp/62.XnL ===' 00:07:46.514 + cat /tmp/62.XnL 00:07:46.514 + echo '=== End of file: /tmp/62.XnL ===' 00:07:46.514 + echo '' 00:07:46.514 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gZB ===' 00:07:46.514 + cat /tmp/spdk_tgt_config.json.gZB 00:07:46.514 + echo '=== End of file: /tmp/spdk_tgt_config.json.gZB ===' 00:07:46.514 + echo '' 00:07:46.514 + rm /tmp/62.XnL /tmp/spdk_tgt_config.json.gZB 00:07:46.514 + exit 1 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:46.514 INFO: configuration change detected. 
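Change detection is the same diff run again after deleting MallocBdevForConfigChangeCheck, the sentinel bdev created earlier for exactly this purpose; diff exiting non-zero (ret=1 above) is the positive result. A sketch reusing the rpc/filter/tgt_config variables from the previous snippet:

    "$rpc" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # the runtime config now lacks one bdev, so the sorted diff must be non-empty
    if ! diff -u <("$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort) \
                 <("$filter" -method sort < "$tgt_config") >/dev/null; then
        echo 'INFO: configuration change detected.'
    fi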
00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 283471 ]] 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.514 08:44:09 json_config -- json_config/json_config.sh@330 -- # killprocess 283471 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@950 -- # '[' -z 283471 ']' 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@954 -- # kill -0 283471 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@955 -- # uname 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 283471 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 283471' 00:07:46.514 killing process with pid 283471 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@969 -- # kill 283471 00:07:46.514 08:44:09 json_config -- common/autotest_common.sh@974 -- # wait 283471 00:07:49.055 08:44:11 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:49.055 08:44:11 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:49.055 08:44:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.055 08:44:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:49.055 08:44:11 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:49.055 08:44:11 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:49.055 INFO: Success 00:07:49.055 08:44:11 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:07:49.055 08:44:11 json_config -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:49.055 08:44:11 json_config -- nvmf/common.sh@121 -- # sync 00:07:49.055 08:44:11 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:07:49.055 08:44:11 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:07:49.055 08:44:11 json_config -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:07:49.055 08:44:11 json_config -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:49.055 08:44:11 json_config -- nvmf/common.sh@521 -- # [[ '' == \t\c\p ]] 00:07:49.055 00:07:49.055 real 0m22.786s 00:07:49.055 user 0m24.817s 00:07:49.055 sys 0m7.099s 00:07:49.055 08:44:11 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.055 08:44:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:49.055 ************************************ 00:07:49.055 END TEST json_config 00:07:49.055 ************************************ 00:07:49.055 08:44:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:49.055 08:44:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.055 08:44:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.055 08:44:11 -- common/autotest_common.sh@10 -- # set +x 00:07:49.055 ************************************ 00:07:49.055 START TEST json_config_extra_key 00:07:49.055 ************************************ 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.055 08:44:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.055 --rc genhtml_branch_coverage=1 00:07:49.055 --rc genhtml_function_coverage=1 00:07:49.055 --rc genhtml_legend=1 00:07:49.055 --rc geninfo_all_blocks=1 00:07:49.055 --rc geninfo_unexecuted_blocks=1 00:07:49.055 00:07:49.055 ' 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.055 --rc genhtml_branch_coverage=1 00:07:49.055 --rc genhtml_function_coverage=1 00:07:49.055 --rc genhtml_legend=1 00:07:49.055 --rc geninfo_all_blocks=1 00:07:49.055 --rc geninfo_unexecuted_blocks=1 00:07:49.055 00:07:49.055 ' 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.055 --rc genhtml_branch_coverage=1 00:07:49.055 --rc genhtml_function_coverage=1 00:07:49.055 --rc genhtml_legend=1 00:07:49.055 --rc geninfo_all_blocks=1 00:07:49.055 --rc geninfo_unexecuted_blocks=1 00:07:49.055 00:07:49.055 ' 00:07:49.055 08:44:11 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.055 --rc genhtml_branch_coverage=1 00:07:49.055 --rc genhtml_function_coverage=1 00:07:49.055 --rc genhtml_legend=1 00:07:49.055 --rc geninfo_all_blocks=1 00:07:49.055 --rc geninfo_unexecuted_blocks=1 00:07:49.055 00:07:49.055 ' 00:07:49.055 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.055 
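The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) gates the lcov coverage flags on the installed lcov version: both versions are split on dots and compared field by field. This sketch reimplements only the semantics, not the exact helper (missing fields default to 0; equal versions are not 'less than'):

    lt() {   # usage: lt 1.15 2  -> returns 0 because 1.15 < 2
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local i
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && \
        echo 'lcov 1.x: enable --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'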
08:44:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.055 08:44:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:49.056 08:44:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.056 08:44:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.056 08:44:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.056 08:44:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.056 08:44:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.056 08:44:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.056 08:44:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.056 08:44:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:49.056 08:44:11 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.056 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.056 08:44:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:49.056 INFO: launching applications... 
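Note the genuine bash complaint captured above: nvmf/common.sh line 33 reduced to '[' '' -eq 1 ']', and test's -eq insists on integer operands, so the empty value yields 'integer expression expected' (harmless here, since the test still evaluates false and the script continues). A defensive sketch; the variable name is a hypothetical stand-in, since the trace does not show which flag was empty:

    flag=''                           # hypothetical; mirrors the empty operand at line 33
    if [ "${flag:-0}" -eq 1 ]; then   # defaulting empty/unset to 0 silences the error
        echo 'feature enabled'
    fi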
00:07:49.056 08:44:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=284782 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:49.056 Waiting for target to run... 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 284782 /var/tmp/spdk_tgt.sock 00:07:49.056 08:44:11 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 284782 ']' 00:07:49.056 08:44:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:49.056 08:44:11 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:49.056 08:44:11 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.056 08:44:11 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:49.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:49.056 08:44:11 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.056 08:44:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:49.056 [2024-11-06 08:44:11.805340] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:49.056 [2024-11-06 08:44:11.805382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284782 ] 00:07:49.316 [2024-11-06 08:44:12.090956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.316 [2024-11-06 08:44:12.123375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.883 08:44:12 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.883 08:44:12 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:49.883 00:07:49.883 08:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:49.883 INFO: shutting down applications... 
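json_config/common.sh tracks every app it manages through the associative arrays declared above (app_pid, app_socket, app_params, configs_path), keyed by app name. A sketch of how those maps drive a generic launch for the 'target' key used in this run:

    declare -A app_pid app_socket app_params configs_path
    app=target
    app_socket[$app]=/var/tmp/spdk_tgt.sock
    app_params[$app]='-m 0x1 -s 1024'
    configs_path[$app]=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json
    # app_params is left unquoted on purpose so the flags word-split
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo "Waiting for $app to run..."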
00:07:49.883 08:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 284782 ]] 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 284782 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 284782 00:07:49.883 08:44:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:50.142 08:44:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:50.142 08:44:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:50.142 08:44:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 284782 00:07:50.142 08:44:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:50.142 08:44:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:50.142 08:44:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:50.142 08:44:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:50.142 SPDK target shutdown done 00:07:50.142 08:44:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:50.142 Success 00:07:50.142 00:07:50.142 real 0m1.565s 00:07:50.142 user 0m1.348s 00:07:50.142 sys 0m0.391s 00:07:50.142 08:44:13 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.142 08:44:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:50.142 ************************************ 00:07:50.142 END TEST json_config_extra_key 00:07:50.142 ************************************ 00:07:50.402 08:44:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:50.402 08:44:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.402 08:44:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.402 08:44:13 -- common/autotest_common.sh@10 -- # set +x 00:07:50.402 ************************************ 00:07:50.402 START TEST alias_rpc 00:07:50.402 ************************************ 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:50.402 * Looking for test storage... 
00:07:50.402 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.402 08:44:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:50.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.402 --rc genhtml_branch_coverage=1 00:07:50.402 --rc genhtml_function_coverage=1 00:07:50.402 --rc genhtml_legend=1 00:07:50.402 --rc geninfo_all_blocks=1 00:07:50.402 --rc geninfo_unexecuted_blocks=1 00:07:50.402 00:07:50.402 ' 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:50.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.402 --rc genhtml_branch_coverage=1 00:07:50.402 --rc genhtml_function_coverage=1 00:07:50.402 --rc genhtml_legend=1 00:07:50.402 --rc geninfo_all_blocks=1 00:07:50.402 --rc geninfo_unexecuted_blocks=1 00:07:50.402 00:07:50.402 ' 00:07:50.402 08:44:13 
alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:50.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.402 --rc genhtml_branch_coverage=1 00:07:50.402 --rc genhtml_function_coverage=1 00:07:50.402 --rc genhtml_legend=1 00:07:50.402 --rc geninfo_all_blocks=1 00:07:50.402 --rc geninfo_unexecuted_blocks=1 00:07:50.402 00:07:50.402 ' 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:50.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.402 --rc genhtml_branch_coverage=1 00:07:50.402 --rc genhtml_function_coverage=1 00:07:50.402 --rc genhtml_legend=1 00:07:50.402 --rc geninfo_all_blocks=1 00:07:50.402 --rc geninfo_unexecuted_blocks=1 00:07:50.402 00:07:50.402 ' 00:07:50.402 08:44:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.402 08:44:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=285087 00:07:50.402 08:44:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 285087 00:07:50.402 08:44:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 285087 ']' 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.402 08:44:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.661 [2024-11-06 08:44:13.429921] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:07:50.661 [2024-11-06 08:44:13.429970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285087 ] 00:07:50.662 [2024-11-06 08:44:13.495004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.662 [2024-11-06 08:44:13.534786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.921 08:44:13 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.921 08:44:13 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:50.921 08:44:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:51.180 08:44:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 285087 00:07:51.180 08:44:13 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 285087 ']' 00:07:51.180 08:44:13 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 285087 00:07:51.180 08:44:13 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:51.180 08:44:13 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.180 08:44:13 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285087 00:07:51.180 08:44:14 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.180 08:44:14 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.180 08:44:14 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285087' 00:07:51.180 killing process with pid 285087 00:07:51.180 08:44:14 alias_rpc -- common/autotest_common.sh@969 -- # kill 285087 00:07:51.180 08:44:14 alias_rpc -- common/autotest_common.sh@974 -- # wait 285087 00:07:51.440 00:07:51.440 real 0m1.114s 00:07:51.440 user 0m1.158s 00:07:51.440 sys 0m0.396s 00:07:51.440 08:44:14 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.440 08:44:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.440 ************************************ 00:07:51.440 END TEST alias_rpc 00:07:51.440 ************************************ 00:07:51.440 08:44:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:51.440 08:44:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:51.440 08:44:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.440 08:44:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.440 08:44:14 -- common/autotest_common.sh@10 -- # set +x 00:07:51.440 ************************************ 00:07:51.440 START TEST spdkcli_tcp 00:07:51.440 ************************************ 00:07:51.440 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:51.700 * Looking for test storage... 
00:07:51.700 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.700 08:44:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 00:07:51.700 00:07:51.700 ' 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 
00:07:51.700 00:07:51.700 ' 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 00:07:51.700 00:07:51.700 ' 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 00:07:51.700 00:07:51.700 ' 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=285359 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 285359 00:07:51.700 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 285359 ']' 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.700 08:44:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.700 [2024-11-06 08:44:14.618456] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:07:51.700 [2024-11-06 08:44:14.618498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285359 ] 00:07:51.700 [2024-11-06 08:44:14.692506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.960 [2024-11-06 08:44:14.736857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.960 [2024-11-06 08:44:14.736861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.960 08:44:14 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.960 08:44:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:51.960 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=285503 00:07:51.960 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:51.960 08:44:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:52.219 [ 00:07:52.219 "bdev_malloc_delete", 00:07:52.219 "bdev_malloc_create", 00:07:52.219 "bdev_null_resize", 00:07:52.219 "bdev_null_delete", 00:07:52.219 "bdev_null_create", 00:07:52.219 "bdev_nvme_cuse_unregister", 00:07:52.219 "bdev_nvme_cuse_register", 00:07:52.219 "bdev_opal_new_user", 00:07:52.219 "bdev_opal_set_lock_state", 00:07:52.219 "bdev_opal_delete", 00:07:52.219 "bdev_opal_get_info", 00:07:52.219 "bdev_opal_create", 00:07:52.219 "bdev_nvme_opal_revert", 00:07:52.219 "bdev_nvme_opal_init", 00:07:52.219 "bdev_nvme_send_cmd", 00:07:52.219 "bdev_nvme_set_keys", 00:07:52.219 "bdev_nvme_get_path_iostat", 00:07:52.219 "bdev_nvme_get_mdns_discovery_info", 00:07:52.219 "bdev_nvme_stop_mdns_discovery", 00:07:52.219 "bdev_nvme_start_mdns_discovery", 00:07:52.219 "bdev_nvme_set_multipath_policy", 00:07:52.219 "bdev_nvme_set_preferred_path", 00:07:52.219 "bdev_nvme_get_io_paths", 00:07:52.219 "bdev_nvme_remove_error_injection", 00:07:52.219 "bdev_nvme_add_error_injection", 00:07:52.219 "bdev_nvme_get_discovery_info", 00:07:52.219 "bdev_nvme_stop_discovery", 00:07:52.219 "bdev_nvme_start_discovery", 00:07:52.219 "bdev_nvme_get_controller_health_info", 00:07:52.219 "bdev_nvme_disable_controller", 00:07:52.219 "bdev_nvme_enable_controller", 00:07:52.220 "bdev_nvme_reset_controller", 00:07:52.220 "bdev_nvme_get_transport_statistics", 00:07:52.220 "bdev_nvme_apply_firmware", 00:07:52.220 "bdev_nvme_detach_controller", 00:07:52.220 "bdev_nvme_get_controllers", 00:07:52.220 "bdev_nvme_attach_controller", 00:07:52.220 "bdev_nvme_set_hotplug", 00:07:52.220 "bdev_nvme_set_options", 00:07:52.220 "bdev_passthru_delete", 00:07:52.220 "bdev_passthru_create", 00:07:52.220 "bdev_lvol_set_parent_bdev", 00:07:52.220 "bdev_lvol_set_parent", 00:07:52.220 "bdev_lvol_check_shallow_copy", 00:07:52.220 "bdev_lvol_start_shallow_copy", 00:07:52.220 "bdev_lvol_grow_lvstore", 00:07:52.220 "bdev_lvol_get_lvols", 00:07:52.220 "bdev_lvol_get_lvstores", 00:07:52.220 "bdev_lvol_delete", 00:07:52.220 "bdev_lvol_set_read_only", 00:07:52.220 "bdev_lvol_resize", 00:07:52.220 "bdev_lvol_decouple_parent", 00:07:52.220 "bdev_lvol_inflate", 00:07:52.220 "bdev_lvol_rename", 00:07:52.220 "bdev_lvol_clone_bdev", 00:07:52.220 "bdev_lvol_clone", 00:07:52.220 "bdev_lvol_snapshot", 00:07:52.220 "bdev_lvol_create", 00:07:52.220 "bdev_lvol_delete_lvstore", 00:07:52.220 "bdev_lvol_rename_lvstore", 00:07:52.220 
"bdev_lvol_create_lvstore", 00:07:52.220 "bdev_raid_set_options", 00:07:52.220 "bdev_raid_remove_base_bdev", 00:07:52.220 "bdev_raid_add_base_bdev", 00:07:52.220 "bdev_raid_delete", 00:07:52.220 "bdev_raid_create", 00:07:52.220 "bdev_raid_get_bdevs", 00:07:52.220 "bdev_error_inject_error", 00:07:52.220 "bdev_error_delete", 00:07:52.220 "bdev_error_create", 00:07:52.220 "bdev_split_delete", 00:07:52.220 "bdev_split_create", 00:07:52.220 "bdev_delay_delete", 00:07:52.220 "bdev_delay_create", 00:07:52.220 "bdev_delay_update_latency", 00:07:52.220 "bdev_zone_block_delete", 00:07:52.220 "bdev_zone_block_create", 00:07:52.220 "blobfs_create", 00:07:52.220 "blobfs_detect", 00:07:52.220 "blobfs_set_cache_size", 00:07:52.220 "bdev_aio_delete", 00:07:52.220 "bdev_aio_rescan", 00:07:52.220 "bdev_aio_create", 00:07:52.220 "bdev_ftl_set_property", 00:07:52.220 "bdev_ftl_get_properties", 00:07:52.220 "bdev_ftl_get_stats", 00:07:52.220 "bdev_ftl_unmap", 00:07:52.220 "bdev_ftl_unload", 00:07:52.220 "bdev_ftl_delete", 00:07:52.220 "bdev_ftl_load", 00:07:52.220 "bdev_ftl_create", 00:07:52.220 "bdev_virtio_attach_controller", 00:07:52.220 "bdev_virtio_scsi_get_devices", 00:07:52.220 "bdev_virtio_detach_controller", 00:07:52.220 "bdev_virtio_blk_set_hotplug", 00:07:52.220 "bdev_iscsi_delete", 00:07:52.220 "bdev_iscsi_create", 00:07:52.220 "bdev_iscsi_set_options", 00:07:52.220 "accel_error_inject_error", 00:07:52.220 "ioat_scan_accel_module", 00:07:52.220 "dsa_scan_accel_module", 00:07:52.220 "iaa_scan_accel_module", 00:07:52.220 "keyring_file_remove_key", 00:07:52.220 "keyring_file_add_key", 00:07:52.220 "keyring_linux_set_options", 00:07:52.220 "fsdev_aio_delete", 00:07:52.220 "fsdev_aio_create", 00:07:52.220 "iscsi_get_histogram", 00:07:52.220 "iscsi_enable_histogram", 00:07:52.220 "iscsi_set_options", 00:07:52.220 "iscsi_get_auth_groups", 00:07:52.220 "iscsi_auth_group_remove_secret", 00:07:52.220 "iscsi_auth_group_add_secret", 00:07:52.220 "iscsi_delete_auth_group", 00:07:52.220 "iscsi_create_auth_group", 00:07:52.220 "iscsi_set_discovery_auth", 00:07:52.220 "iscsi_get_options", 00:07:52.220 "iscsi_target_node_request_logout", 00:07:52.220 "iscsi_target_node_set_redirect", 00:07:52.220 "iscsi_target_node_set_auth", 00:07:52.220 "iscsi_target_node_add_lun", 00:07:52.220 "iscsi_get_stats", 00:07:52.220 "iscsi_get_connections", 00:07:52.220 "iscsi_portal_group_set_auth", 00:07:52.220 "iscsi_start_portal_group", 00:07:52.220 "iscsi_delete_portal_group", 00:07:52.220 "iscsi_create_portal_group", 00:07:52.220 "iscsi_get_portal_groups", 00:07:52.220 "iscsi_delete_target_node", 00:07:52.220 "iscsi_target_node_remove_pg_ig_maps", 00:07:52.220 "iscsi_target_node_add_pg_ig_maps", 00:07:52.220 "iscsi_create_target_node", 00:07:52.220 "iscsi_get_target_nodes", 00:07:52.220 "iscsi_delete_initiator_group", 00:07:52.220 "iscsi_initiator_group_remove_initiators", 00:07:52.220 "iscsi_initiator_group_add_initiators", 00:07:52.220 "iscsi_create_initiator_group", 00:07:52.220 "iscsi_get_initiator_groups", 00:07:52.220 "nvmf_set_crdt", 00:07:52.220 "nvmf_set_config", 00:07:52.220 "nvmf_set_max_subsystems", 00:07:52.220 "nvmf_stop_mdns_prr", 00:07:52.220 "nvmf_publish_mdns_prr", 00:07:52.220 "nvmf_subsystem_get_listeners", 00:07:52.220 "nvmf_subsystem_get_qpairs", 00:07:52.220 "nvmf_subsystem_get_controllers", 00:07:52.220 "nvmf_get_stats", 00:07:52.220 "nvmf_get_transports", 00:07:52.220 "nvmf_create_transport", 00:07:52.220 "nvmf_get_targets", 00:07:52.220 "nvmf_delete_target", 00:07:52.220 "nvmf_create_target", 00:07:52.220 
"nvmf_subsystem_allow_any_host", 00:07:52.220 "nvmf_subsystem_set_keys", 00:07:52.220 "nvmf_subsystem_remove_host", 00:07:52.220 "nvmf_subsystem_add_host", 00:07:52.220 "nvmf_ns_remove_host", 00:07:52.220 "nvmf_ns_add_host", 00:07:52.220 "nvmf_subsystem_remove_ns", 00:07:52.220 "nvmf_subsystem_set_ns_ana_group", 00:07:52.220 "nvmf_subsystem_add_ns", 00:07:52.220 "nvmf_subsystem_listener_set_ana_state", 00:07:52.220 "nvmf_discovery_get_referrals", 00:07:52.220 "nvmf_discovery_remove_referral", 00:07:52.220 "nvmf_discovery_add_referral", 00:07:52.220 "nvmf_subsystem_remove_listener", 00:07:52.220 "nvmf_subsystem_add_listener", 00:07:52.220 "nvmf_delete_subsystem", 00:07:52.220 "nvmf_create_subsystem", 00:07:52.220 "nvmf_get_subsystems", 00:07:52.220 "env_dpdk_get_mem_stats", 00:07:52.220 "nbd_get_disks", 00:07:52.220 "nbd_stop_disk", 00:07:52.220 "nbd_start_disk", 00:07:52.220 "ublk_recover_disk", 00:07:52.220 "ublk_get_disks", 00:07:52.220 "ublk_stop_disk", 00:07:52.220 "ublk_start_disk", 00:07:52.220 "ublk_destroy_target", 00:07:52.220 "ublk_create_target", 00:07:52.220 "virtio_blk_create_transport", 00:07:52.220 "virtio_blk_get_transports", 00:07:52.220 "vhost_controller_set_coalescing", 00:07:52.220 "vhost_get_controllers", 00:07:52.220 "vhost_delete_controller", 00:07:52.220 "vhost_create_blk_controller", 00:07:52.220 "vhost_scsi_controller_remove_target", 00:07:52.220 "vhost_scsi_controller_add_target", 00:07:52.220 "vhost_start_scsi_controller", 00:07:52.220 "vhost_create_scsi_controller", 00:07:52.220 "thread_set_cpumask", 00:07:52.220 "scheduler_set_options", 00:07:52.220 "framework_get_governor", 00:07:52.220 "framework_get_scheduler", 00:07:52.220 "framework_set_scheduler", 00:07:52.220 "framework_get_reactors", 00:07:52.220 "thread_get_io_channels", 00:07:52.220 "thread_get_pollers", 00:07:52.220 "thread_get_stats", 00:07:52.220 "framework_monitor_context_switch", 00:07:52.220 "spdk_kill_instance", 00:07:52.220 "log_enable_timestamps", 00:07:52.220 "log_get_flags", 00:07:52.220 "log_clear_flag", 00:07:52.220 "log_set_flag", 00:07:52.220 "log_get_level", 00:07:52.220 "log_set_level", 00:07:52.220 "log_get_print_level", 00:07:52.220 "log_set_print_level", 00:07:52.220 "framework_enable_cpumask_locks", 00:07:52.220 "framework_disable_cpumask_locks", 00:07:52.220 "framework_wait_init", 00:07:52.220 "framework_start_init", 00:07:52.220 "scsi_get_devices", 00:07:52.220 "bdev_get_histogram", 00:07:52.220 "bdev_enable_histogram", 00:07:52.220 "bdev_set_qos_limit", 00:07:52.220 "bdev_set_qd_sampling_period", 00:07:52.220 "bdev_get_bdevs", 00:07:52.220 "bdev_reset_iostat", 00:07:52.220 "bdev_get_iostat", 00:07:52.220 "bdev_examine", 00:07:52.220 "bdev_wait_for_examine", 00:07:52.220 "bdev_set_options", 00:07:52.220 "accel_get_stats", 00:07:52.220 "accel_set_options", 00:07:52.220 "accel_set_driver", 00:07:52.220 "accel_crypto_key_destroy", 00:07:52.220 "accel_crypto_keys_get", 00:07:52.220 "accel_crypto_key_create", 00:07:52.220 "accel_assign_opc", 00:07:52.220 "accel_get_module_info", 00:07:52.220 "accel_get_opc_assignments", 00:07:52.220 "vmd_rescan", 00:07:52.220 "vmd_remove_device", 00:07:52.220 "vmd_enable", 00:07:52.220 "sock_get_default_impl", 00:07:52.221 "sock_set_default_impl", 00:07:52.221 "sock_impl_set_options", 00:07:52.221 "sock_impl_get_options", 00:07:52.221 "iobuf_get_stats", 00:07:52.221 "iobuf_set_options", 00:07:52.221 "keyring_get_keys", 00:07:52.221 "framework_get_pci_devices", 00:07:52.221 "framework_get_config", 00:07:52.221 "framework_get_subsystems", 00:07:52.221 
"fsdev_set_opts", 00:07:52.221 "fsdev_get_opts", 00:07:52.221 "trace_get_info", 00:07:52.221 "trace_get_tpoint_group_mask", 00:07:52.221 "trace_disable_tpoint_group", 00:07:52.221 "trace_enable_tpoint_group", 00:07:52.221 "trace_clear_tpoint_mask", 00:07:52.221 "trace_set_tpoint_mask", 00:07:52.221 "notify_get_notifications", 00:07:52.221 "notify_get_types", 00:07:52.221 "spdk_get_version", 00:07:52.221 "rpc_get_methods" 00:07:52.221 ] 00:07:52.221 08:44:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:52.221 08:44:15 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.221 08:44:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.221 08:44:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:52.221 08:44:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 285359 00:07:52.221 08:44:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 285359 ']' 00:07:52.221 08:44:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 285359 00:07:52.221 08:44:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:52.221 08:44:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.221 08:44:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285359 00:07:52.480 08:44:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.480 08:44:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.480 08:44:15 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285359' 00:07:52.480 killing process with pid 285359 00:07:52.480 08:44:15 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 285359 00:07:52.480 08:44:15 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 285359 00:07:52.740 00:07:52.740 real 0m1.153s 00:07:52.740 user 0m1.950s 00:07:52.740 sys 0m0.450s 00:07:52.740 08:44:15 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.740 08:44:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.740 ************************************ 00:07:52.740 END TEST spdkcli_tcp 00:07:52.740 ************************************ 00:07:52.740 08:44:15 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:52.740 08:44:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.740 08:44:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.740 08:44:15 -- common/autotest_common.sh@10 -- # set +x 00:07:52.740 ************************************ 00:07:52.740 START TEST dpdk_mem_utility 00:07:52.740 ************************************ 00:07:52.740 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:52.740 * Looking for test storage... 
00:07:52.740 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:07:52.740 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:52.740 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:07:52.740 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:52.999 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:52.999 08:44:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.000 08:44:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.000 08:44:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.000 08:44:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.000 --rc genhtml_branch_coverage=1 00:07:53.000 --rc genhtml_function_coverage=1 00:07:53.000 --rc genhtml_legend=1 00:07:53.000 --rc geninfo_all_blocks=1 00:07:53.000 --rc geninfo_unexecuted_blocks=1 00:07:53.000 00:07:53.000 ' 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.000 --rc 
genhtml_branch_coverage=1 00:07:53.000 --rc genhtml_function_coverage=1 00:07:53.000 --rc genhtml_legend=1 00:07:53.000 --rc geninfo_all_blocks=1 00:07:53.000 --rc geninfo_unexecuted_blocks=1 00:07:53.000 00:07:53.000 ' 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.000 --rc genhtml_branch_coverage=1 00:07:53.000 --rc genhtml_function_coverage=1 00:07:53.000 --rc genhtml_legend=1 00:07:53.000 --rc geninfo_all_blocks=1 00:07:53.000 --rc geninfo_unexecuted_blocks=1 00:07:53.000 00:07:53.000 ' 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.000 --rc genhtml_branch_coverage=1 00:07:53.000 --rc genhtml_function_coverage=1 00:07:53.000 --rc genhtml_legend=1 00:07:53.000 --rc geninfo_all_blocks=1 00:07:53.000 --rc geninfo_unexecuted_blocks=1 00:07:53.000 00:07:53.000 ' 00:07:53.000 08:44:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:53.000 08:44:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=285667 00:07:53.000 08:44:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 285667 00:07:53.000 08:44:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 285667 ']' 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.000 08:44:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.000 [2024-11-06 08:44:15.831013] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
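Once this second target is up, the test asks it to dump its DPDK memory state and then post-processes the dump file; the heap, mempool, and memzone summaries below all come from that script. Condensed (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py):

    # Ask spdk_tgt for its DPDK memory stats; the RPC replies with the dump
    # location, /tmp/spdk_mem_dump.txt, as the JSON below shows
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps/mempools/memzones, then detail heap id 0
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0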
00:07:53.000 [2024-11-06 08:44:15.831058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285667 ] 00:07:53.000 [2024-11-06 08:44:15.901854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.000 [2024-11-06 08:44:15.943727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.259 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.259 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:53.259 08:44:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:53.259 08:44:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:53.259 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.259 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.259 { 00:07:53.259 "filename": "/tmp/spdk_mem_dump.txt" 00:07:53.259 } 00:07:53.259 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.259 08:44:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:53.259 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:53.259 1 heaps totaling size 818.000000 MiB 00:07:53.259 size: 818.000000 MiB heap id: 0 00:07:53.259 end heaps---------- 00:07:53.259 9 mempools totaling size 603.782043 MiB 00:07:53.259 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:53.259 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:53.259 size: 100.555481 MiB name: bdev_io_285667 00:07:53.259 size: 50.003479 MiB name: msgpool_285667 00:07:53.259 size: 36.509338 MiB name: fsdev_io_285667 00:07:53.259 size: 21.763794 MiB name: PDU_Pool 00:07:53.259 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:53.259 size: 4.133484 MiB name: evtpool_285667 00:07:53.259 size: 0.026123 MiB name: Session_Pool 00:07:53.259 end mempools------- 00:07:53.259 6 memzones totaling size 4.142822 MiB 00:07:53.259 size: 1.000366 MiB name: RG_ring_0_285667 00:07:53.259 size: 1.000366 MiB name: RG_ring_1_285667 00:07:53.259 size: 1.000366 MiB name: RG_ring_4_285667 00:07:53.259 size: 1.000366 MiB name: RG_ring_5_285667 00:07:53.259 size: 0.125366 MiB name: RG_ring_2_285667 00:07:53.259 size: 0.015991 MiB name: RG_ring_3_285667 00:07:53.259 end memzones------- 00:07:53.259 08:44:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:53.519 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:53.519 list of free elements. 
size: 10.852478 MiB 00:07:53.519 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:53.519 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:53.519 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:53.519 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:53.519 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:53.519 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:53.519 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:53.519 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:53.519 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:53.519 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:53.519 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:53.519 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:53.519 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:53.519 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:53.519 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:53.519 list of standard malloc elements. size: 199.218628 MiB 00:07:53.519 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:53.519 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:53.519 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:53.519 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:53.519 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:53.519 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:53.519 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:53.519 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:53.519 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:53.519 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:53.519 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:53.519 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:53.519 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:53.519 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:53.519 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:53.519 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:53.519 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:53.519 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:53.520 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:53.520 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:53.520 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:07:53.520 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:53.520 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:53.520 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:53.520 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:53.520 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:53.520 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:53.520 list of memzone associated elements. size: 607.928894 MiB 00:07:53.520 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:53.520 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:53.520 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:53.520 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:53.520 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:53.520 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_285667_0 00:07:53.520 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:53.520 associated memzone info: size: 48.002930 MiB name: MP_msgpool_285667_0 00:07:53.520 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:53.520 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_285667_0 00:07:53.520 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:53.520 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:53.520 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:53.520 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:53.520 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:53.520 associated memzone info: size: 3.000122 MiB name: MP_evtpool_285667_0 00:07:53.520 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:53.520 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_285667 00:07:53.520 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:53.520 associated memzone info: size: 1.007996 MiB name: MP_evtpool_285667 00:07:53.520 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:53.520 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:53.520 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:53.520 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:53.520 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:53.520 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:53.520 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:53.520 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:53.520 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:53.520 associated memzone info: size: 1.000366 MiB name: RG_ring_0_285667 00:07:53.520 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:53.520 associated memzone info: size: 1.000366 MiB name: RG_ring_1_285667 00:07:53.520 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:53.520 associated memzone info: size: 1.000366 MiB name: RG_ring_4_285667 00:07:53.520 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:07:53.520 associated memzone info: size: 1.000366 MiB name: RG_ring_5_285667 00:07:53.520 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:53.520 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_285667 00:07:53.520 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:53.520 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_285667 00:07:53.520 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:53.520 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:53.520 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:53.520 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:53.520 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:53.520 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:53.520 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:53.520 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_285667 00:07:53.520 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:53.520 associated memzone info: size: 0.125366 MiB name: RG_ring_2_285667 00:07:53.520 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:53.520 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:53.520 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:53.520 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:53.520 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:53.520 associated memzone info: size: 0.015991 MiB name: RG_ring_3_285667 00:07:53.520 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:53.520 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:53.520 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:53.520 associated memzone info: size: 0.000183 MiB name: MP_msgpool_285667 00:07:53.520 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:53.520 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_285667 00:07:53.520 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:53.520 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_285667 00:07:53.520 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:53.520 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:53.520 08:44:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:53.520 08:44:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 285667 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 285667 ']' 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 285667 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285667 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285667' 00:07:53.520 killing process with pid 285667 00:07:53.520 08:44:16 dpdk_mem_utility -- 
common/autotest_common.sh@969 -- # kill 285667 00:07:53.520 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 285667 00:07:53.779 00:07:53.779 real 0m1.016s 00:07:53.779 user 0m0.947s 00:07:53.779 sys 0m0.401s 00:07:53.779 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.779 08:44:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.779 ************************************ 00:07:53.779 END TEST dpdk_mem_utility 00:07:53.779 ************************************ 00:07:53.779 08:44:16 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:53.779 08:44:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.779 08:44:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.779 08:44:16 -- common/autotest_common.sh@10 -- # set +x 00:07:53.779 ************************************ 00:07:53.779 START TEST event 00:07:53.779 ************************************ 00:07:53.779 08:44:16 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:53.779 * Looking for test storage... 00:07:53.779 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:53.779 08:44:16 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:53.779 08:44:16 event -- common/autotest_common.sh@1689 -- # lcov --version 00:07:53.779 08:44:16 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:54.039 08:44:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.039 08:44:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.039 08:44:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.039 08:44:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.039 08:44:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.039 08:44:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.039 08:44:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.039 08:44:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.039 08:44:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.039 08:44:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.039 08:44:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.039 08:44:16 event -- scripts/common.sh@344 -- # case "$op" in 00:07:54.039 08:44:16 event -- scripts/common.sh@345 -- # : 1 00:07:54.039 08:44:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.039 08:44:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.039 08:44:16 event -- scripts/common.sh@365 -- # decimal 1 00:07:54.039 08:44:16 event -- scripts/common.sh@353 -- # local d=1 00:07:54.039 08:44:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.039 08:44:16 event -- scripts/common.sh@355 -- # echo 1 00:07:54.039 08:44:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.039 08:44:16 event -- scripts/common.sh@366 -- # decimal 2 00:07:54.039 08:44:16 event -- scripts/common.sh@353 -- # local d=2 00:07:54.039 08:44:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.039 08:44:16 event -- scripts/common.sh@355 -- # echo 2 00:07:54.039 08:44:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.039 08:44:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.039 08:44:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.039 08:44:16 event -- scripts/common.sh@368 -- # return 0 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:54.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.039 --rc genhtml_branch_coverage=1 00:07:54.039 --rc genhtml_function_coverage=1 00:07:54.039 --rc genhtml_legend=1 00:07:54.039 --rc geninfo_all_blocks=1 00:07:54.039 --rc geninfo_unexecuted_blocks=1 00:07:54.039 00:07:54.039 ' 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:54.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.039 --rc genhtml_branch_coverage=1 00:07:54.039 --rc genhtml_function_coverage=1 00:07:54.039 --rc genhtml_legend=1 00:07:54.039 --rc geninfo_all_blocks=1 00:07:54.039 --rc geninfo_unexecuted_blocks=1 00:07:54.039 00:07:54.039 ' 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:54.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.039 --rc genhtml_branch_coverage=1 00:07:54.039 --rc genhtml_function_coverage=1 00:07:54.039 --rc genhtml_legend=1 00:07:54.039 --rc geninfo_all_blocks=1 00:07:54.039 --rc geninfo_unexecuted_blocks=1 00:07:54.039 00:07:54.039 ' 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:54.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.039 --rc genhtml_branch_coverage=1 00:07:54.039 --rc genhtml_function_coverage=1 00:07:54.039 --rc genhtml_legend=1 00:07:54.039 --rc geninfo_all_blocks=1 00:07:54.039 --rc geninfo_unexecuted_blocks=1 00:07:54.039 00:07:54.039 ' 00:07:54.039 08:44:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:54.039 08:44:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:54.039 08:44:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:54.039 08:44:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.039 08:44:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 ************************************ 00:07:54.039 START TEST event_perf 00:07:54.039 ************************************ 00:07:54.039 08:44:16 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
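The lt 1.15 2 xtrace just repeated for the event suite (and before every other test here) is a pure-bash dotted-version compare: split both strings on ., - and :, then walk the fields numerically. A condensed sketch of the '<' path only, with the simplifying assumption that missing fields compare as 0 (the real cmp_versions in scripts/common.sh handles the other operators and validates each field):

    lt() {   # usage: lt VER1 VER2 -> true when VER1 < VER2
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    # As in the trace: lcov 1.x still wants the legacy lcov_* option spelling
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'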
00:07:54.039 Running I/O for 1 seconds...[2024-11-06 08:44:16.921680] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:54.039 [2024-11-06 08:44:16.921747] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285959 ] 00:07:54.039 [2024-11-06 08:44:17.001906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.039 [2024-11-06 08:44:17.045526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.039 [2024-11-06 08:44:17.045640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.039 [2024-11-06 08:44:17.045744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.039 [2024-11-06 08:44:17.045745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.418 Running I/O for 1 seconds... 00:07:55.418 lcore 0: 203389 00:07:55.418 lcore 1: 203387 00:07:55.418 lcore 2: 203387 00:07:55.418 lcore 3: 203387 00:07:55.418 done. 00:07:55.418 00:07:55.418 real 0m1.185s 00:07:55.418 user 0m4.093s 00:07:55.418 sys 0m0.088s 00:07:55.418 08:44:18 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.418 08:44:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.418 ************************************ 00:07:55.418 END TEST event_perf 00:07:55.418 ************************************ 00:07:55.418 08:44:18 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:55.418 08:44:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:55.418 08:44:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.418 08:44:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.418 ************************************ 00:07:55.418 START TEST event_reactor 00:07:55.418 ************************************ 00:07:55.418 08:44:18 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:55.418 [2024-11-06 08:44:18.175325] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
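event_perf above starts one reactor per core in mask 0xF and counts how many events each lcore turns over in one second; the four counters landing within a couple of events of ~203k apiece suggests dispatch stayed balanced across the reactors. The same measurement can be repeated by hand with the flags shown in the trace:

    # 4 reactors (core mask 0xF), measure event throughput for 1 second
    ./test/event/event_perf/event_perf -m 0xF -t 1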
00:07:55.418 [2024-11-06 08:44:18.175392] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286212 ] 00:07:55.418 [2024-11-06 08:44:18.255422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.418 [2024-11-06 08:44:18.295585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.356 test_start 00:07:56.356 oneshot 00:07:56.356 tick 100 00:07:56.356 tick 100 00:07:56.356 tick 250 00:07:56.356 tick 100 00:07:56.356 tick 100 00:07:56.356 tick 100 00:07:56.356 tick 250 00:07:56.356 tick 500 00:07:56.356 tick 100 00:07:56.356 tick 100 00:07:56.356 tick 250 00:07:56.356 tick 100 00:07:56.356 tick 100 00:07:56.356 test_end 00:07:56.356 00:07:56.356 real 0m1.179s 00:07:56.356 user 0m1.097s 00:07:56.356 sys 0m0.077s 00:07:56.356 08:44:19 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.356 08:44:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:56.356 ************************************ 00:07:56.356 END TEST event_reactor 00:07:56.356 ************************************ 00:07:56.356 08:44:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:56.356 08:44:19 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:56.356 08:44:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.356 08:44:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.615 ************************************ 00:07:56.615 START TEST event_reactor_perf 00:07:56.615 ************************************ 00:07:56.615 08:44:19 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:56.615 [2024-11-06 08:44:19.420800] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
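event_reactor, which just finished, registers a oneshot event plus the periodic tick events echoed above (the 100/250/500 lines) and verifies all of them fire before test_end; event_reactor_perf, starting below, instead measures raw single-reactor event throughput. Both binaries take the same -t duration flag used throughout this suite:

    # Hand-run equivalents of the two reactor micro-tests in this log
    ./test/event/reactor/reactor -t 1
    ./test/event/reactor_perf/reactor_perf -t 1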
00:07:56.615 [2024-11-06 08:44:19.420867] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286460 ] 00:07:56.615 [2024-11-06 08:44:19.498578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.615 [2024-11-06 08:44:19.537616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.994 test_start 00:07:57.994 test_end 00:07:57.994 Performance: 515146 events per second 00:07:57.994 00:07:57.994 real 0m1.179s 00:07:57.994 user 0m1.101s 00:07:57.994 sys 0m0.074s 00:07:57.994 08:44:20 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.994 08:44:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.994 ************************************ 00:07:57.994 END TEST event_reactor_perf 00:07:57.994 ************************************ 00:07:57.994 08:44:20 event -- event/event.sh@49 -- # uname -s 00:07:57.994 08:44:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:57.994 08:44:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:57.994 08:44:20 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.994 08:44:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.994 08:44:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.994 ************************************ 00:07:57.994 START TEST event_scheduler 00:07:57.994 ************************************ 00:07:57.994 08:44:20 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:57.994 * Looking for test storage... 
00:07:57.994 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:07:57.994 08:44:20 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:57.994 08:44:20 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:07:57.994 08:44:20 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:57.994 08:44:20 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.994 08:44:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.995 08:44:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.995 --rc genhtml_branch_coverage=1 00:07:57.995 --rc genhtml_function_coverage=1 00:07:57.995 --rc genhtml_legend=1 00:07:57.995 --rc geninfo_all_blocks=1 00:07:57.995 --rc geninfo_unexecuted_blocks=1 00:07:57.995 00:07:57.995 ' 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.995 --rc genhtml_branch_coverage=1 00:07:57.995 --rc genhtml_function_coverage=1 00:07:57.995 --rc genhtml_legend=1 00:07:57.995 --rc geninfo_all_blocks=1 00:07:57.995 --rc geninfo_unexecuted_blocks=1 00:07:57.995 00:07:57.995 ' 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.995 --rc genhtml_branch_coverage=1 00:07:57.995 --rc genhtml_function_coverage=1 00:07:57.995 --rc genhtml_legend=1 00:07:57.995 --rc geninfo_all_blocks=1 00:07:57.995 --rc geninfo_unexecuted_blocks=1 00:07:57.995 00:07:57.995 ' 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:57.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.995 --rc genhtml_branch_coverage=1 00:07:57.995 --rc genhtml_function_coverage=1 00:07:57.995 --rc genhtml_legend=1 00:07:57.995 --rc geninfo_all_blocks=1 00:07:57.995 --rc geninfo_unexecuted_blocks=1 00:07:57.995 00:07:57.995 ' 00:07:57.995 08:44:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:57.995 08:44:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=286740 00:07:57.995 08:44:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:57.995 08:44:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.995 08:44:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 286740 
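waitforlisten 286740, traced next, is the same startup handshake used for every target in this run: poll until the process is alive and its RPC socket answers, giving up after max_retries. A condensed sketch of such a loop, using rpc_get_methods as the probe (the real helper in test/common/autotest_common.sh is more careful):

    waitforlisten() {   # sketch: block until $1 serves RPC on $2
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died early
            ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up
    }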
00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 286740 ']' 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.995 08:44:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.995 [2024-11-06 08:44:20.870812] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:57.995 [2024-11-06 08:44:20.870861] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286740 ] 00:07:57.995 [2024-11-06 08:44:20.945617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.995 [2024-11-06 08:44:20.987819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.995 [2024-11-06 08:44:20.987929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.995 [2024-11-06 08:44:20.988033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.995 [2024-11-06 08:44:20.988035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:58.255 08:44:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 [2024-11-06 08:44:21.040575] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:58.255 [2024-11-06 08:44:21.040592] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:58.255 [2024-11-06 08:44:21.040601] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:58.255 [2024-11-06 08:44:21.040607] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:58.255 [2024-11-06 08:44:21.040613] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 [2024-11-06 08:44:21.119026] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
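Because the scheduler app was launched with --wait-for-rpc, the framework is still uninitialized at this point, which is what lets the test swap in the dynamic scheduler before framework_start_init; note that the dpdk governor failure above is tolerated, since the dynamic scheduler still comes up and applies its load/core/busy limits. The thread-management calls that follow go through a test-local RPC plugin (path assumed here to be test/event/scheduler/scheduler_plugin.py):

    # Must happen before framework init -- hence --wait-for-rpc
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # Plugin RPCs; rpc.py finds scheduler_plugin via PYTHONPATH
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100   # name, cpumask, busy percentage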
00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 ************************************ 00:07:58.255 START TEST scheduler_create_thread 00:07:58.255 ************************************ 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 2 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 3 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 4 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 5 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 6 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 7 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 8 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 9 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 10 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.255 08:44:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.194 08:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.194 08:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:59.194 08:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.194 08:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.574 08:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.574 08:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:00.574 08:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:00.574 08:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.574 08:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.953 08:44:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.953 00:08:01.953 real 0m3.381s 00:08:01.953 user 0m0.025s 00:08:01.953 sys 0m0.004s 00:08:01.953 08:44:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.953 08:44:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.953 ************************************ 00:08:01.953 END TEST scheduler_create_thread 00:08:01.953 ************************************ 00:08:01.953 08:44:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:01.953 08:44:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 286740 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 286740 ']' 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 286740 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286740 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286740' 00:08:01.953 killing process with pid 286740 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 286740 00:08:01.953 08:44:24 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 286740 00:08:01.953 [2024-11-06 08:44:24.919015] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
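The scheduler_create_thread test that just finished drives everything through the test app's RPC plugin: active threads pinned to each of cores 0-3 at 100% load, idle pinned threads at 0%, an unpinned one_third_active thread at 30%, a half_active thread whose activity is raised to 50% via its returned thread ID, and finally a throwaway thread that is deleted again. A minimal sketch of that sequence, assuming the scheduler_plugin module is importable by rpc.py (as the test arranges) and the same socket as above:

  RPC="./spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
  # Pinned threads on core 0: one fully busy, one fully idle
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # Unpinned thread created at 0% activity, then raised to 50%
  id=$($RPC scheduler_thread_create -n half_active -a 0)
  $RPC scheduler_thread_set_active "$id" 50
  # Thread created only to exercise deletion
  id=$($RPC scheduler_thread_create -n deleted -a 100)
  $RPC scheduler_thread_delete "$id"

The thread IDs 11 and 12 in the log are simply what the create calls returned on this run, not fixed values.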
00:08:02.213 00:08:02.213 real 0m4.471s 00:08:02.213 user 0m7.838s 00:08:02.213 sys 0m0.384s 00:08:02.213 08:44:25 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.213 08:44:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:02.213 ************************************ 00:08:02.213 END TEST event_scheduler 00:08:02.213 ************************************ 00:08:02.213 08:44:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:02.213 08:44:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:02.213 08:44:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.213 08:44:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.213 08:44:25 event -- common/autotest_common.sh@10 -- # set +x 00:08:02.213 ************************************ 00:08:02.213 START TEST app_repeat 00:08:02.213 ************************************ 00:08:02.213 08:44:25 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=287491 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 287491' 00:08:02.213 Process app_repeat pid: 287491 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:02.213 spdk_app_start Round 0 00:08:02.213 08:44:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 287491 /var/tmp/spdk-nbd.sock 00:08:02.213 08:44:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 287491 ']' 00:08:02.213 08:44:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.213 08:44:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.213 08:44:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:02.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.213 08:44:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.213 08:44:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:02.472 [2024-11-06 08:44:25.234212] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:08:02.472 [2024-11-06 08:44:25.234265] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287491 ] 00:08:02.472 [2024-11-06 08:44:25.314446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.472 [2024-11-06 08:44:25.355053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.472 [2024-11-06 08:44:25.355053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.472 08:44:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.472 08:44:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:02.473 08:44:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.731 Malloc0 00:08:02.731 08:44:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.991 Malloc1 00:08:02.991 08:44:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.991 08:44:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:03.250 /dev/nbd0 00:08:03.250 08:44:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:03.250 08:44:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:03.250 1+0 records in 00:08:03.250 1+0 records out 00:08:03.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020958 s, 19.5 MB/s 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.250 08:44:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:03.250 08:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.250 08:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.250 08:44:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:03.509 /dev/nbd1 00:08:03.509 08:44:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:03.509 08:44:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:03.509 1+0 records in 00:08:03.509 1+0 records out 00:08:03.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177839 s, 23.0 MB/s 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.509 08:44:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:03.509 08:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.509 08:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.509 08:44:26 event.app_repeat -- bdev/nbd_common.sh@95 -- 
# nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.509 08:44:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.509 08:44:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:03.768 { 00:08:03.768 "nbd_device": "/dev/nbd0", 00:08:03.768 "bdev_name": "Malloc0" 00:08:03.768 }, 00:08:03.768 { 00:08:03.768 "nbd_device": "/dev/nbd1", 00:08:03.768 "bdev_name": "Malloc1" 00:08:03.768 } 00:08:03.768 ]' 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:03.768 { 00:08:03.768 "nbd_device": "/dev/nbd0", 00:08:03.768 "bdev_name": "Malloc0" 00:08:03.768 }, 00:08:03.768 { 00:08:03.768 "nbd_device": "/dev/nbd1", 00:08:03.768 "bdev_name": "Malloc1" 00:08:03.768 } 00:08:03.768 ]' 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:03.768 /dev/nbd1' 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:03.768 /dev/nbd1' 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.768 08:44:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:03.769 256+0 records in 00:08:03.769 256+0 records out 00:08:03.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010678 s, 98.2 MB/s 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.769 256+0 records in 00:08:03.769 256+0 records out 00:08:03.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013911 s, 75.4 MB/s 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.769 256+0 records in 00:08:03.769 256+0 records out 00:08:03.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014478 s, 72.4 MB/s 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.769 08:44:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.028 08:44:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.287 08:44:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:04.546 08:44:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:04.546 08:44:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:04.805 08:44:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:04.805 [2024-11-06 08:44:27.717009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.805 [2024-11-06 08:44:27.754110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.805 [2024-11-06 08:44:27.754110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.805 [2024-11-06 08:44:27.794550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:04.805 [2024-11-06 08:44:27.794587] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:08.093 08:44:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:08.093 08:44:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:08.093 spdk_app_start Round 1 00:08:08.093 08:44:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 287491 /var/tmp/spdk-nbd.sock 00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 287491 ']' 00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
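Each app_repeat round traced here performs the same nbd round-trip: two 64 MiB malloc bdevs (4096-byte blocks) are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device with dd, read back and compared with cmp, and the devices are torn down until nbd_get_disks reports an empty list. A minimal sketch of one round, assuming root privileges, a loaded nbd module, and a hypothetical scratch file path:

  RPC="./spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  TMP=/tmp/nbdrandtest   # illustrative; the test keeps this file under its own tree
  # bdev_malloc_create prints the new bdev name (Malloc0, Malloc1, ...)
  m0=$($RPC bdev_malloc_create 64 4096)
  m1=$($RPC bdev_malloc_create 64 4096)
  $RPC nbd_start_disk "$m0" /dev/nbd0
  $RPC nbd_start_disk "$m1" /dev/nbd1
  # Write 256 x 4 KiB of random data through each device, then verify
  dd if=/dev/urandom of="$TMP" bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
      dd if="$TMP" of="$d" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$TMP" "$d"
  done
  rm "$TMP"
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC nbd_get_disks   # expected output: []

Each round then ends with spdk_kill_instance SIGTERM and a 3-second sleep before the app is restarted for the next round, which is the transition visible in the log just above.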
00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.093 08:44:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:08.093 08:44:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:08.093 Malloc0 00:08:08.093 08:44:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:08.353 Malloc1 00:08:08.353 08:44:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.353 08:44:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:08.613 /dev/nbd0 00:08:08.613 08:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:08.613 08:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:08:08.613 1+0 records in 00:08:08.613 1+0 records out 00:08:08.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236288 s, 17.3 MB/s 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.613 08:44:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:08.613 08:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.613 08:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.613 08:44:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:08.872 /dev/nbd1 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:08.872 1+0 records in 00:08:08.872 1+0 records out 00:08:08.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227372 s, 18.0 MB/s 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.872 08:44:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:08.872 { 00:08:08.872 
"nbd_device": "/dev/nbd0", 00:08:08.872 "bdev_name": "Malloc0" 00:08:08.872 }, 00:08:08.872 { 00:08:08.872 "nbd_device": "/dev/nbd1", 00:08:08.872 "bdev_name": "Malloc1" 00:08:08.872 } 00:08:08.872 ]' 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.872 08:44:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:08.872 { 00:08:08.872 "nbd_device": "/dev/nbd0", 00:08:08.872 "bdev_name": "Malloc0" 00:08:08.872 }, 00:08:08.872 { 00:08:08.872 "nbd_device": "/dev/nbd1", 00:08:08.872 "bdev_name": "Malloc1" 00:08:08.872 } 00:08:08.872 ]' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.132 /dev/nbd1' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.132 /dev/nbd1' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:09.132 256+0 records in 00:08:09.132 256+0 records out 00:08:09.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106344 s, 98.6 MB/s 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.132 256+0 records in 00:08:09.132 256+0 records out 00:08:09.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014743 s, 71.1 MB/s 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:09.132 256+0 records in 00:08:09.132 256+0 records out 00:08:09.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152443 s, 68.8 MB/s 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.132 08:44:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.392 08:44:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.652 08:44:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:09.912 08:44:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:09.912 08:44:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:09.912 08:44:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:10.171 [2024-11-06 08:44:33.041426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.171 [2024-11-06 08:44:33.077710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.171 [2024-11-06 08:44:33.077710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.171 [2024-11-06 08:44:33.118998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:10.171 [2024-11-06 08:44:33.119038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:13.463 08:44:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:13.463 08:44:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:13.463 spdk_app_start Round 2 00:08:13.463 08:44:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 287491 /var/tmp/spdk-nbd.sock 00:08:13.463 08:44:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 287491 ']' 00:08:13.463 08:44:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:13.463 08:44:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.463 08:44:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:13.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:13.463 08:44:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.463 08:44:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:13.463 08:44:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.463 08:44:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:13.463 08:44:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:13.463 Malloc0 00:08:13.463 08:44:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:13.463 Malloc1 00:08:13.723 08:44:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:13.723 /dev/nbd0 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:08:13.723 1+0 records in 00:08:13.723 1+0 records out 00:08:13.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973625 s, 4.2 MB/s 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:13.723 08:44:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:13.723 08:44:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:13.983 /dev/nbd1 00:08:13.983 08:44:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:13.983 08:44:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:13.983 1+0 records in 00:08:13.983 1+0 records out 00:08:13.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209 s, 19.6 MB/s 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:13.983 08:44:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:13.983 08:44:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.983 08:44:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:13.983 08:44:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.983 08:44:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.983 08:44:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:14.242 { 00:08:14.242 
"nbd_device": "/dev/nbd0", 00:08:14.242 "bdev_name": "Malloc0" 00:08:14.242 }, 00:08:14.242 { 00:08:14.242 "nbd_device": "/dev/nbd1", 00:08:14.242 "bdev_name": "Malloc1" 00:08:14.242 } 00:08:14.242 ]' 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:14.242 { 00:08:14.242 "nbd_device": "/dev/nbd0", 00:08:14.242 "bdev_name": "Malloc0" 00:08:14.242 }, 00:08:14.242 { 00:08:14.242 "nbd_device": "/dev/nbd1", 00:08:14.242 "bdev_name": "Malloc1" 00:08:14.242 } 00:08:14.242 ]' 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:14.242 /dev/nbd1' 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:14.242 /dev/nbd1' 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:14.242 256+0 records in 00:08:14.242 256+0 records out 00:08:14.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106021 s, 98.9 MB/s 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.242 08:44:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:14.502 256+0 records in 00:08:14.502 256+0 records out 00:08:14.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141102 s, 74.3 MB/s 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:14.502 256+0 records in 00:08:14.502 256+0 records out 00:08:14.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144755 s, 72.4 MB/s 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:14.502 08:44:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.762 08:44:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:15.021 08:44:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:15.021 08:44:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:15.281 08:44:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:15.540 [2024-11-06 08:44:38.332247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.540 [2024-11-06 08:44:38.371653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.540 [2024-11-06 08:44:38.371654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.540 [2024-11-06 08:44:38.412638] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:15.540 [2024-11-06 08:44:38.412682] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:18.829 08:44:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 287491 /var/tmp/spdk-nbd.sock 00:08:18.829 08:44:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 287491 ']' 00:08:18.829 08:44:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:18.829 08:44:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:18.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
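The trace above is the harness's NBD round-trip check followed by teardown: a 1 MiB random file is copied onto each exported /dev/nbdX with O_DIRECT, compared back byte-for-byte, then each disk is stopped over JSON-RPC and the export count is confirmed to be zero. A minimal sketch of that flow, assuming /dev/nbd0 and /dev/nbd1 are already exported by a target on /var/tmp/spdk-nbd.sock and that rpc.py is the script seen in the trace:

  #!/usr/bin/env bash
  set -e
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  nbd_list=(/dev/nbd0 /dev/nbd1)

  tmp_file=$(mktemp)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data

  for dev in "${nbd_list[@]}"; do                                # write phase
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do                                # verify phase
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"

  for dev in "${nbd_list[@]}"; do                                # teardown
      "$rpc" -s "$sock" nbd_stop_disk "$dev"
      name=$(basename "$dev")
      for i in {1..20}; do                                       # wait for the kernel to drop it
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1
      done
  done

  # nbd_get_disks now returns []; note grep -c prints 0 but exits non-zero on
  # no match, which is why the trace follows it with 'true'
  count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]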
00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:18.830 08:44:41 event.app_repeat -- event/event.sh@39 -- # killprocess 287491 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 287491 ']' 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 287491 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 287491 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 287491' 00:08:18.830 killing process with pid 287491 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@969 -- # kill 287491 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@974 -- # wait 287491 00:08:18.830 spdk_app_start is called in Round 0. 00:08:18.830 Shutdown signal received, stop current app iteration 00:08:18.830 Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 reinitialization... 00:08:18.830 spdk_app_start is called in Round 1. 00:08:18.830 Shutdown signal received, stop current app iteration 00:08:18.830 Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 reinitialization... 00:08:18.830 spdk_app_start is called in Round 2. 00:08:18.830 Shutdown signal received, stop current app iteration 00:08:18.830 Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 reinitialization... 00:08:18.830 spdk_app_start is called in Round 3. 00:08:18.830 Shutdown signal received, stop current app iteration 00:08:18.830 08:44:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:18.830 08:44:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:18.830 00:08:18.830 real 0m16.380s 00:08:18.830 user 0m35.993s 00:08:18.830 sys 0m2.489s 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.830 08:44:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:18.830 ************************************ 00:08:18.830 END TEST app_repeat 00:08:18.830 ************************************ 00:08:18.830 08:44:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:18.830 08:44:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:18.830 08:44:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.830 08:44:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.830 08:44:41 event -- common/autotest_common.sh@10 -- # set +x 00:08:18.830 ************************************ 00:08:18.830 START TEST cpu_locks 00:08:18.830 ************************************ 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:18.830 * Looking for test storage... 
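The killprocess helper traced just above (and reused by every cpu_locks test below) has a fixed shape: confirm the PID is alive with kill -0, read its command name with ps so a reactor process is being targeted rather than a sudo wrapper, signal it, then wait so the exit status gets reaped. A reduced sketch of that shape, not the exact autotest_common.sh body:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                 # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
      # the trace checks whether comm is 'sudo'; the real helper then targets
      # sudo's child instead, while this sketch simply refuses that edge case
      [ "$name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"                                # SIGTERM by default
      wait "$pid" 2>/dev/null || true            # reap if it is our child
  }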
00:08:18.830 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.830 08:44:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.830 --rc genhtml_branch_coverage=1 00:08:18.830 --rc genhtml_function_coverage=1 00:08:18.830 --rc genhtml_legend=1 00:08:18.830 --rc geninfo_all_blocks=1 00:08:18.830 --rc geninfo_unexecuted_blocks=1 00:08:18.830 00:08:18.830 ' 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.830 --rc genhtml_branch_coverage=1 00:08:18.830 --rc 
genhtml_function_coverage=1 00:08:18.830 --rc genhtml_legend=1 00:08:18.830 --rc geninfo_all_blocks=1 00:08:18.830 --rc geninfo_unexecuted_blocks=1 00:08:18.830 00:08:18.830 ' 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.830 --rc genhtml_branch_coverage=1 00:08:18.830 --rc genhtml_function_coverage=1 00:08:18.830 --rc genhtml_legend=1 00:08:18.830 --rc geninfo_all_blocks=1 00:08:18.830 --rc geninfo_unexecuted_blocks=1 00:08:18.830 00:08:18.830 ' 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:18.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.830 --rc genhtml_branch_coverage=1 00:08:18.830 --rc genhtml_function_coverage=1 00:08:18.830 --rc genhtml_legend=1 00:08:18.830 --rc geninfo_all_blocks=1 00:08:18.830 --rc geninfo_unexecuted_blocks=1 00:08:18.830 00:08:18.830 ' 00:08:18.830 08:44:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:18.830 08:44:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:18.830 08:44:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:18.830 08:44:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.830 08:44:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:19.090 ************************************ 00:08:19.090 START TEST default_locks 00:08:19.090 ************************************ 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=290486 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 290486 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 290486 ']' 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.090 08:44:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:19.090 [2024-11-06 08:44:41.910767] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
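The lcov probe above runs the cmp_versions logic from scripts/common.sh: 'lt 1.15 2' splits both version strings on '.', '-' and ':' and compares them numerically component by component, deciding whether the installed lcov predates 2.x and therefore which coverage flags to export. A self-contained sketch of that comparison (names borrowed from the trace; the real function handles more operators than '<', and this sketch assumes purely numeric components):

  lt() {   # is dotted version $1 strictly older than $2?
      local IFS=.-: v a b ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( 10#$a < 10#$b )) && return 0        # first smaller component decides
          (( 10#$a > 10#$b )) && return 1
      done
      return 1                                   # equal versions are not older
  }
  lt 1.15 2 && echo "lcov predates 2.x"          # 1 < 2 in the first component, so this prints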
00:08:19.090 [2024-11-06 08:44:41.910805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290486 ] 00:08:19.091 [2024-11-06 08:44:41.984766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.091 [2024-11-06 08:44:42.026556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.350 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.350 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:19.350 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 290486 00:08:19.350 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 290486 00:08:19.350 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.610 lslocks: write error 00:08:19.610 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 290486 00:08:19.610 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 290486 ']' 00:08:19.610 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 290486 00:08:19.610 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:19.610 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.610 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290486 00:08:19.869 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.869 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.869 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290486' 00:08:19.869 killing process with pid 290486 00:08:19.869 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 290486 00:08:19.869 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 290486 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 290486 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 290486 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 290486 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 290486 ']' 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 
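Above, locks_exist asserts that the freshly started single-core target (-m 0x1) actually holds its CPU-core file lock: lslocks lists the locks held by the PID and grep -q looks for a path containing spdk_cpu_lock. The stray 'lslocks: write error' is most likely harmless, lslocks hitting EPIPE because grep -q exits as soon as it matches and closes the pipe. The check itself reduces to a two-liner, assuming util-linux lslocks is available:

  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock   # holds a spdk_cpu_lock_* lock?
  }
  locks_exist 290486 && echo "core lock is held"  # PID taken from the run above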
00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.129 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (290486) - No such process 00:08:20.129 ERROR: process (pid: 290486) is no longer running 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:20.129 00:08:20.129 real 0m1.088s 00:08:20.129 user 0m1.047s 00:08:20.129 sys 0m0.488s 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.129 08:44:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.129 ************************************ 00:08:20.129 END TEST default_locks 00:08:20.129 ************************************ 00:08:20.129 08:44:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:20.129 08:44:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.129 08:44:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.129 08:44:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.129 ************************************ 00:08:20.129 START TEST default_locks_via_rpc 00:08:20.129 ************************************ 00:08:20.129 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:20.129 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=290743 00:08:20.130 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 290743 00:08:20.130 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.130 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 290743 ']' 00:08:20.130 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.130 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.130 08:44:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.130 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.130 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.130 [2024-11-06 08:44:43.068192] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:20.130 [2024-11-06 08:44:43.068252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290743 ] 00:08:20.130 [2024-11-06 08:44:43.127808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.388 [2024-11-06 08:44:43.171744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.388 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.388 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:20.388 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:20.388 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.388 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 290743 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 290743 00:08:20.646 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 290743 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 290743 ']' 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 290743 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
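default_locks_via_rpc drives the same lock from the RPC side: framework_disable_cpumask_locks releases the per-core locks of a running target, no_locks confirms nothing is still claimed, and framework_enable_cpumask_locks takes the locks again. A hedged equivalent against a live target on the default /var/tmp/spdk.sock (the pgrep line is just one illustrative way to find the PID):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  pid=$(pgrep -f spdk_tgt | head -n1)
  "$rpc" framework_disable_cpumask_locks                              # release the per-core locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "still locked?"   # expect silence
  "$rpc" framework_enable_cpumask_locks                               # re-acquire them
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locked again"    # expect a match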
00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290743 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290743' 00:08:20.906 killing process with pid 290743 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 290743 00:08:20.906 08:44:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 290743 00:08:21.474 00:08:21.474 real 0m1.168s 00:08:21.474 user 0m1.170s 00:08:21.474 sys 0m0.502s 00:08:21.474 08:44:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.474 08:44:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 ************************************ 00:08:21.474 END TEST default_locks_via_rpc 00:08:21.474 ************************************ 00:08:21.474 08:44:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:21.474 08:44:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.474 08:44:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.474 08:44:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 ************************************ 00:08:21.474 START TEST non_locking_app_on_locked_coremask 00:08:21.474 ************************************ 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=290999 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 290999 /var/tmp/spdk.sock 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 290999 ']' 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.474 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 [2024-11-06 08:44:44.301092] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:08:21.474 [2024-11-06 08:44:44.301131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290999 ] 00:08:21.474 [2024-11-06 08:44:44.373378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.474 [2024-11-06 08:44:44.415392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=291013 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 291013 /var/tmp/spdk2.sock 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 291013 ']' 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.733 08:44:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.733 [2024-11-06 08:44:44.676107] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:21.733 [2024-11-06 08:44:44.676151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291013 ] 00:08:21.992 [2024-11-06 08:44:44.758881] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
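The point of this test is visible in the two startups above: the first spdk_tgt claims core 0's lock, yet a second instance can share the core because it is launched with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice) and given its own RPC socket so the two don't collide. Schematically, with the binary path from the trace:

  tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 &                # takes the core-0 lock
  pid1=$!
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                        # same core, but no lock is attempted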
00:08:21.992 [2024-11-06 08:44:44.758903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.992 [2024-11-06 08:44:44.843222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.559 08:44:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.559 08:44:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:22.559 08:44:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 290999 00:08:22.559 08:44:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 290999 00:08:22.559 08:44:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:23.127 lslocks: write error 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 290999 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 290999 ']' 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 290999 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290999 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290999' 00:08:23.127 killing process with pid 290999 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 290999 00:08:23.127 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 290999 00:08:24.064 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 291013 00:08:24.064 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 291013 ']' 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 291013 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 291013 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 291013' 00:08:24.065 killing 
process with pid 291013 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 291013 00:08:24.065 08:44:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 291013 00:08:24.065 00:08:24.065 real 0m2.822s 00:08:24.065 user 0m2.981s 00:08:24.065 sys 0m0.907s 00:08:24.065 08:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.065 08:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.065 ************************************ 00:08:24.065 END TEST non_locking_app_on_locked_coremask 00:08:24.065 ************************************ 00:08:24.324 08:44:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:24.324 08:44:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.324 08:44:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.324 08:44:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.324 ************************************ 00:08:24.324 START TEST locking_app_on_unlocked_coremask 00:08:24.324 ************************************ 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=291501 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 291501 /var/tmp/spdk.sock 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 291501 ']' 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.324 08:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.324 [2024-11-06 08:44:47.191951] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:24.324 [2024-11-06 08:44:47.191995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291501 ] 00:08:24.324 [2024-11-06 08:44:47.268280] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:24.324 [2024-11-06 08:44:47.268305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.324 [2024-11-06 08:44:47.308186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=291661 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 291661 /var/tmp/spdk2.sock 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 291661 ']' 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.259 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.260 [2024-11-06 08:44:48.061477] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
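Here the roles are reversed: the first target starts with --disable-cpumask-locks, leaving the lock free for the second, normally started target to claim. The lock itself is an advisory file lock on /var/tmp/spdk_cpu_lock_<core> (the _000 to _002 names appear in the overlapped-coremask check later in this run), so it can be probed from a shell. A hypothetical one-liner, assuming util-linux flock and that the lock file already exists:

  flock -n /var/tmp/spdk_cpu_lock_000 -c true || echo "core 0 is claimed"

flock -n fails immediately when another process holds the lock; note that flock creates the file if it is missing, which would make the probe succeed vacuously.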
00:08:25.260 [2024-11-06 08:44:48.061526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291661 ] 00:08:25.260 [2024-11-06 08:44:48.152509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.260 [2024-11-06 08:44:48.232908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.196 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.196 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:26.196 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 291661 00:08:26.196 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 291661 00:08:26.196 08:44:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:26.454 lslocks: write error 00:08:26.454 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 291501 00:08:26.454 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 291501 ']' 00:08:26.454 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 291501 00:08:26.454 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:26.454 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.454 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 291501 00:08:26.714 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.714 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.714 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 291501' 00:08:26.714 killing process with pid 291501 00:08:26.714 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 291501 00:08:26.714 08:44:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 291501 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 291661 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 291661 ']' 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 291661 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 291661 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.283 08:44:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 291661' 00:08:27.283 killing process with pid 291661 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 291661 00:08:27.283 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 291661 00:08:27.542 00:08:27.542 real 0m3.329s 00:08:27.542 user 0m3.588s 00:08:27.542 sys 0m1.001s 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.542 ************************************ 00:08:27.542 END TEST locking_app_on_unlocked_coremask 00:08:27.542 ************************************ 00:08:27.542 08:44:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:27.542 08:44:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.542 08:44:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.542 08:44:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.542 ************************************ 00:08:27.542 START TEST locking_app_on_locked_coremask 00:08:27.542 ************************************ 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=292017 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 292017 /var/tmp/spdk.sock 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 292017 ']' 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.542 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.801 [2024-11-06 08:44:50.591654] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:08:27.801 [2024-11-06 08:44:50.591697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292017 ] 00:08:27.801 [2024-11-06 08:44:50.668223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.801 [2024-11-06 08:44:50.708113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=292231 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 292231 /var/tmp/spdk2.sock 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 292231 /var/tmp/spdk2.sock 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 292231 /var/tmp/spdk2.sock 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 292231 ']' 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:28.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.060 08:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.060 [2024-11-06 08:44:50.976103] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
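The 'NOT waitforlisten 292231' sequence above is the harness's negative test: the second target must fail to start because core 0 is already claimed, and NOT inverts the exit status (the es bookkeeping in the trace) so the test passes only when the wrapped command errors out. A reduced sketch, mirroring the autotest_common.sh names but simplifying the body:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))            # succeed only if the wrapped command failed
  }
  NOT false && echo "false failed, as required"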
00:08:28.060 [2024-11-06 08:44:50.976149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292231 ] 00:08:28.060 [2024-11-06 08:44:51.061918] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 292017 has claimed it. 00:08:28.060 [2024-11-06 08:44:51.061960] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:28.627 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (292231) - No such process 00:08:28.627 ERROR: process (pid: 292231) is no longer running 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 292017 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 292017 00:08:28.627 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:29.194 lslocks: write error 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 292017 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 292017 ']' 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 292017 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 292017 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 292017' 00:08:29.194 killing process with pid 292017 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 292017 00:08:29.194 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 292017 00:08:29.454 00:08:29.454 real 0m1.883s 00:08:29.454 user 0m2.004s 00:08:29.454 sys 0m0.636s 00:08:29.454 08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.454 
08:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.454 ************************************ 00:08:29.454 END TEST locking_app_on_locked_coremask 00:08:29.454 ************************************ 00:08:29.454 08:44:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:29.454 08:44:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.454 08:44:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.454 08:44:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.712 ************************************ 00:08:29.712 START TEST locking_overlapped_coremask 00:08:29.712 ************************************ 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=292489 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 292489 /var/tmp/spdk.sock 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 292489 ']' 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.712 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.712 [2024-11-06 08:44:52.545871] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:08:29.712 [2024-11-06 08:44:52.545914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292489 ] 00:08:29.712 [2024-11-06 08:44:52.620457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.712 [2024-11-06 08:44:52.663225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.712 [2024-11-06 08:44:52.663293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.712 [2024-11-06 08:44:52.663293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=292508 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 292508 /var/tmp/spdk2.sock 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 292508 /var/tmp/spdk2.sock 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 292508 /var/tmp/spdk2.sock 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 292508 ']' 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.971 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.971 [2024-11-06 08:44:52.929094] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
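The overlapped-coremask failure being set up here is plain mask arithmetic: -m 0x7 gives the first target cores 0-2, -m 0x1c asks for cores 2-4, and the bitwise intersection is core 2, whose lock the second instance will fail to take. The overlap can be checked directly:

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 'overlap: 0x4', i.e. core 2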
00:08:29.971 [2024-11-06 08:44:52.929138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292508 ] 00:08:30.230 [2024-11-06 08:44:53.019962] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 292489 has claimed it. 00:08:30.230 [2024-11-06 08:44:53.019999] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:30.797 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (292508) - No such process 00:08:30.797 ERROR: process (pid: 292508) is no longer running 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 292489 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 292489 ']' 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 292489 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 292489 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 292489' 00:08:30.797 killing process with pid 292489 00:08:30.797 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 292489 00:08:30.797 08:44:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 292489 00:08:31.055 00:08:31.055 real 0m1.433s 00:08:31.055 user 0m3.940s 00:08:31.055 sys 0m0.393s 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:31.055 ************************************ 00:08:31.055 END TEST locking_overlapped_coremask 00:08:31.055 ************************************ 00:08:31.055 08:44:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:31.055 08:44:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.055 08:44:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.055 08:44:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.055 ************************************ 00:08:31.055 START TEST locking_overlapped_coremask_via_rpc 00:08:31.055 ************************************ 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=292764 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 292764 /var/tmp/spdk.sock 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 292764 ']' 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.055 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.056 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.056 08:44:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.056 [2024-11-06 08:44:54.045823] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:31.056 [2024-11-06 08:44:54.045861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292764 ] 00:08:31.314 [2024-11-06 08:44:54.118238] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
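This variant starts the target with --disable-cpumask-locks, so it boots without claiming cores 0-2 (hence the "CPU core locks deactivated" notice); the test then re-enables locking at runtime over JSON-RPC. Reduced to its essence:

    # Started with locks off, the target can still claim its cores later:
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks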
00:08:31.314 [2024-11-06 08:44:54.118261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.314 [2024-11-06 08:44:54.162803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.314 [2024-11-06 08:44:54.162911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.314 [2024-11-06 08:44:54.162911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=292899 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 292899 /var/tmp/spdk2.sock 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 292899 ']' 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:31.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.882 08:44:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.141 [2024-11-06 08:44:54.944790] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:32.141 [2024-11-06 08:44:54.944845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292899 ] 00:08:32.141 [2024-11-06 08:44:55.038871] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:32.141 [2024-11-06 08:44:55.038897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.141 [2024-11-06 08:44:55.127270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.141 [2024-11-06 08:44:55.131250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.141 [2024-11-06 08:44:55.131251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:33.078 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.079 [2024-11-06 08:44:55.792268] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 292764 has claimed it. 
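Claiming succeeds on the first target and creates one lock file per core under /var/tmp, which is exactly what blocks the second claim above. check_remaining_locks (cpu_locks.sh@36-38 in the trace) then verifies the set with a glob-vs-brace-expansion comparison:

    # One lock file per claimed core; the glob must match cores 000..002 exactly.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files"   # echo is a sketch addition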
00:08:33.079 request:
00:08:33.079 {
00:08:33.079 "method": "framework_enable_cpumask_locks",
00:08:33.079 "req_id": 1
00:08:33.079 }
00:08:33.079 Got JSON-RPC error response
00:08:33.079 response:
00:08:33.079 {
00:08:33.079 "code": -32603,
00:08:33.079 "message": "Failed to claim CPU core: 2"
00:08:33.079 }
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 292764 /var/tmp/spdk.sock
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 292764 ']'
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:33.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:33.079 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 292899 /var/tmp/spdk2.sock
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 292899 ']'
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
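The NOT wrapper inverts the exit status: the second target must fail to enable locks while pid 292764 holds core 2, so the -32603 response above is the expected outcome. Stripped of the tracing, the assertion amounts to:

    # Success here would be a test failure.
    if scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo 'ERROR: lock claim unexpectedly succeeded' >&2
        exit 1
    fi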
00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.079 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:33.338 00:08:33.338 real 0m2.213s 00:08:33.338 user 0m0.993s 00:08:33.338 sys 0m0.165s 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.338 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.338 ************************************ 00:08:33.338 END TEST locking_overlapped_coremask_via_rpc 00:08:33.338 ************************************ 00:08:33.338 08:44:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:33.338 08:44:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 292764 ]] 00:08:33.338 08:44:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 292764 00:08:33.338 08:44:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 292764 ']' 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 292764 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 292764 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 292764' 00:08:33.339 killing process with pid 292764 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 292764 00:08:33.339 08:44:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 292764 00:08:33.598 08:44:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 292899 ]] 00:08:33.598 08:44:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 292899 00:08:33.598 08:44:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 292899 ']' 00:08:33.598 08:44:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 292899 00:08:33.598 08:44:56 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:33.598 08:44:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
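killprocess, as traced here, probes the pid with kill -0 and checks the process name (reactor_0 in this run) before signalling, so it never kills sudo or an already-dead process. A condensed sketch of that helper, omitting its platform checks and retries:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return                       # still alive?
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return   # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }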
00:08:33.598 08:44:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 292899 00:08:33.857 08:44:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:33.857 08:44:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:33.857 08:44:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 292899' 00:08:33.857 killing process with pid 292899 00:08:33.857 08:44:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 292899 00:08:33.857 08:44:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 292899 00:08:34.116 08:44:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.116 08:44:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:34.116 08:44:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 292764 ]] 00:08:34.116 08:44:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 292764 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 292764 ']' 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 292764 00:08:34.116 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (292764) - No such process 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 292764 is not found' 00:08:34.116 Process with pid 292764 is not found 00:08:34.116 08:44:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 292899 ]] 00:08:34.116 08:44:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 292899 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 292899 ']' 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 292899 00:08:34.116 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (292899) - No such process 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 292899 is not found' 00:08:34.116 Process with pid 292899 is not found 00:08:34.116 08:44:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.116 00:08:34.116 real 0m15.319s 00:08:34.116 user 0m26.949s 00:08:34.116 sys 0m5.052s 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.116 08:44:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.116 ************************************ 00:08:34.116 END TEST cpu_locks 00:08:34.116 ************************************ 00:08:34.116 00:08:34.116 real 0m40.304s 00:08:34.116 user 1m17.339s 00:08:34.116 sys 0m8.529s 00:08:34.116 08:44:57 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.116 08:44:57 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.116 ************************************ 00:08:34.116 END TEST event 00:08:34.116 ************************************ 00:08:34.116 08:44:57 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:34.116 08:44:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.116 08:44:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.116 08:44:57 -- common/autotest_common.sh@10 -- # set +x 00:08:34.116 ************************************ 00:08:34.116 START TEST thread 00:08:34.116 ************************************ 00:08:34.116 08:44:57 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:34.375 * Looking for test storage... 00:08:34.375 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:08:34.375 08:44:57 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:34.375 08:44:57 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:08:34.375 08:44:57 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:34.375 08:44:57 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:34.375 08:44:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.375 08:44:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.375 08:44:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.375 08:44:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.375 08:44:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.375 08:44:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.375 08:44:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.375 08:44:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.375 08:44:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.375 08:44:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.375 08:44:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.375 08:44:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:34.375 08:44:57 thread -- scripts/common.sh@345 -- # : 1 00:08:34.375 08:44:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.375 08:44:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.375 08:44:57 thread -- scripts/common.sh@365 -- # decimal 1 00:08:34.375 08:44:57 thread -- scripts/common.sh@353 -- # local d=1 00:08:34.375 08:44:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.375 08:44:57 thread -- scripts/common.sh@355 -- # echo 1 00:08:34.375 08:44:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.376 08:44:57 thread -- scripts/common.sh@366 -- # decimal 2 00:08:34.376 08:44:57 thread -- scripts/common.sh@353 -- # local d=2 00:08:34.376 08:44:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.376 08:44:57 thread -- scripts/common.sh@355 -- # echo 2 00:08:34.376 08:44:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.376 08:44:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.376 08:44:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.376 08:44:57 thread -- scripts/common.sh@368 -- # return 0 00:08:34.376 08:44:57 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.376 08:44:57 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 08:44:57 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 08:44:57 thread -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 08:44:57 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:34.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.376 --rc genhtml_branch_coverage=1 00:08:34.376 --rc genhtml_function_coverage=1 00:08:34.376 --rc genhtml_legend=1 00:08:34.376 --rc geninfo_all_blocks=1 00:08:34.376 --rc geninfo_unexecuted_blocks=1 00:08:34.376 00:08:34.376 ' 00:08:34.376 08:44:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.376 08:44:57 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:34.376 08:44:57 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.376 08:44:57 thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.376 ************************************ 00:08:34.376 START TEST thread_poller_perf 00:08:34.376 ************************************ 00:08:34.376 08:44:57 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.376 [2024-11-06 08:44:57.306432] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:34.376 [2024-11-06 08:44:57.306509] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293339 ] 00:08:34.376 [2024-11-06 08:44:57.382930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.635 [2024-11-06 08:44:57.422759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.635 Running 1000 pollers for 1 seconds with 1 microseconds period. 
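poller_perf registers -b pollers with a period of -l microseconds and drives the reactor for -t seconds (flag meanings inferred from the "Running 1000 pollers..." banner, not from the tool's help text). The suite runs it twice, differing only in the period:

    poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
    poller_perf -b 1000 -l 0 -t 1   # same load, period 0: run on every reactor iteration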
00:08:35.571 [2024-11-06T07:44:58.585Z] ======================================
00:08:35.571 [2024-11-06T07:44:58.585Z] busy:2108832232 (cyc)
00:08:35.571 [2024-11-06T07:44:58.585Z] total_run_count: 423000
00:08:35.571 [2024-11-06T07:44:58.585Z] tsc_hz: 2100000000 (cyc)
00:08:35.571 [2024-11-06T07:44:58.585Z] ======================================
00:08:35.571 [2024-11-06T07:44:58.585Z] poller_cost: 4985 (cyc), 2373 (nsec)
00:08:35.571
00:08:35.571 real 0m1.182s
00:08:35.571 user 0m1.108s
00:08:35.571 sys 0m0.069s
00:08:35.571 08:44:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:35.571 08:44:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:35.571 ************************************
00:08:35.571 END TEST thread_poller_perf
00:08:35.571 ************************************
00:08:35.571 08:44:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:35.571 08:44:58 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:08:35.571 08:44:58 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:35.571 08:44:58 thread -- common/autotest_common.sh@10 -- # set +x
00:08:35.571 ************************************
00:08:35.571 START TEST thread_poller_perf
00:08:35.571 ************************************
00:08:35.572 08:44:58 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:35.572 [2024-11-06 08:44:58.560364] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:08:35.572 [2024-11-06 08:44:58.560438] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293592 ]
00:08:35.831 [2024-11-06 08:44:58.636724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:35.831 [2024-11-06 08:44:58.673381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.831 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:08:36.767 [2024-11-06T07:44:59.781Z] ======================================
00:08:36.767 [2024-11-06T07:44:59.781Z] busy:2101536032 (cyc)
00:08:36.767 [2024-11-06T07:44:59.781Z] total_run_count: 5599000
00:08:36.767 [2024-11-06T07:44:59.781Z] tsc_hz: 2100000000 (cyc)
00:08:36.767 [2024-11-06T07:44:59.781Z] ======================================
00:08:36.767 [2024-11-06T07:44:59.781Z] poller_cost: 375 (cyc), 178 (nsec)
00:08:36.767
00:08:36.767 real 0m1.171s
00:08:36.767 user 0m1.095s
00:08:36.767 sys 0m0.072s
00:08:36.767 08:44:59 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:36.767 08:44:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:36.767 ************************************
00:08:36.767 END TEST thread_poller_perf
00:08:36.767 ************************************
00:08:36.767 08:44:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:36.767
00:08:36.767 real 0m2.671s
00:08:36.767 user 0m2.361s
00:08:36.767 sys 0m0.323s
00:08:36.767 08:44:59 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:36.767 08:44:59 thread -- common/autotest_common.sh@10 -- # set +x
00:08:36.767 ************************************
00:08:36.767 END TEST thread
00:08:36.767 ************************************
00:08:36.767 08:44:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:08:36.767 08:44:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:08:37.026 08:44:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:37.026 08:44:59 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.026 08:44:59 -- common/autotest_common.sh@10 -- # set +x
00:08:37.026 ************************************
00:08:37.026 START TEST app_cmdline
00:08:37.026 ************************************
00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:08:37.026 * Looking for test storage...
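The banners above report raw TSC busy cycles and run counts; poller_cost is simply their ratio, converted to nanoseconds via tsc_hz. Both reported values reproduce with shell arithmetic:

    # timed run:   2108832232 cyc / 423000 runs  ~= 4985 cyc, ~2373 ns at 2.1 GHz
    echo $(( 2108832232 / 423000 ))
    echo $(( 2108832232 / 423000 * 1000000000 / 2100000000 ))
    # period-0 run: 2101536032 cyc / 5599000 runs ~= 375 cyc, ~178 ns
    echo $(( 2101536032 / 5599000 ))

The roughly 13x drop in per-call cost is expected: with period 0 the poller runs on every reactor iteration instead of going through the timed-poller path.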
00:08:37.026 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.026 08:44:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.026 --rc genhtml_branch_coverage=1 00:08:37.026 --rc genhtml_function_coverage=1 00:08:37.026 --rc genhtml_legend=1 00:08:37.026 --rc geninfo_all_blocks=1 00:08:37.026 --rc geninfo_unexecuted_blocks=1 00:08:37.026 00:08:37.026 ' 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.026 --rc genhtml_branch_coverage=1 00:08:37.026 --rc genhtml_function_coverage=1 00:08:37.026 --rc genhtml_legend=1 00:08:37.026 --rc geninfo_all_blocks=1 00:08:37.026 --rc geninfo_unexecuted_blocks=1 
00:08:37.026 00:08:37.026 ' 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.026 --rc genhtml_branch_coverage=1 00:08:37.026 --rc genhtml_function_coverage=1 00:08:37.026 --rc genhtml_legend=1 00:08:37.026 --rc geninfo_all_blocks=1 00:08:37.026 --rc geninfo_unexecuted_blocks=1 00:08:37.026 00:08:37.026 ' 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.026 --rc genhtml_branch_coverage=1 00:08:37.026 --rc genhtml_function_coverage=1 00:08:37.026 --rc genhtml_legend=1 00:08:37.026 --rc geninfo_all_blocks=1 00:08:37.026 --rc geninfo_unexecuted_blocks=1 00:08:37.026 00:08:37.026 ' 00:08:37.026 08:44:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:37.026 08:44:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=293892 00:08:37.026 08:44:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 293892 00:08:37.026 08:44:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 293892 ']' 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.026 08:44:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:37.026 [2024-11-06 08:45:00.035763] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
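For this suite the target is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over the socket; the test exercises both the allowed and the filtered paths, as the trace below shows:

    scripts/rpc.py spdk_get_version         # allowed: returns the version JSON below
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two methods
    scripts/rpc.py env_dpdk_get_mem_stats   # filtered: -32601 'Method not found'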
00:08:37.026 [2024-11-06 08:45:00.035824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293892 ]
00:08:37.285 [2024-11-06 08:45:00.111909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.285 [2024-11-06 08:45:00.153491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:37.545 08:45:00 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:37.545 08:45:00 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:08:37.545 08:45:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:08:37.545 {
00:08:37.545 "version": "SPDK v25.01-pre git sha1 ca5713c38",
00:08:37.545 "fields": {
00:08:37.545 "major": 25,
00:08:37.545 "minor": 1,
00:08:37.545 "patch": 0,
00:08:37.545 "suffix": "-pre",
00:08:37.545 "commit": "ca5713c38"
00:08:37.545 }
00:08:37.545 }
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@26 -- # sort
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:37.804 request:
00:08:37.804 {
00:08:37.804 "method": "env_dpdk_get_mem_stats",
00:08:37.804 "req_id": 1
00:08:37.804 }
00:08:37.804 Got JSON-RPC error response
00:08:37.804 response:
00:08:37.804 {
00:08:37.804 "code": -32601,
00:08:37.804 "message": "Method not found"
00:08:37.804 }
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:37.804 08:45:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 293892
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 293892 ']'
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 293892
00:08:37.804 08:45:00 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:08:38.063 08:45:00 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:38.063 08:45:00 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 293892
00:08:38.063 08:45:00 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:38.063 08:45:00 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:38.063 08:45:00 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 293892'
00:08:38.063 killing process with pid 293892
00:08:38.063 08:45:00 app_cmdline -- common/autotest_common.sh@969 -- # kill 293892
00:08:38.063 08:45:00 app_cmdline -- common/autotest_common.sh@974 -- # wait 293892
00:08:38.322
00:08:38.322 real 0m1.353s
00:08:38.322 user 0m1.590s
00:08:38.322 sys 0m0.446s
00:08:38.322 08:45:01 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:38.322 08:45:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:38.322 ************************************
00:08:38.322 END TEST app_cmdline
00:08:38.322 ************************************
00:08:38.322 08:45:01 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
00:08:38.323 08:45:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:38.323 08:45:01 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:38.323 08:45:01 -- common/autotest_common.sh@10 -- # set +x
00:08:38.323 ************************************
00:08:38.323 START TEST version
00:08:38.323 ************************************
00:08:38.323 08:45:01 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
00:08:38.323 * Looking for test storage...
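The version suite that follows cross-checks the C headers against the Python package: it greps the SPDK_VERSION_* defines out of include/spdk/version.h, assembles "25.1", and compares against spdk.__version__ ("25.1rc0", with the -pre suffix mapped to rc0). A condensed sketch of the extraction traced below:

    get_header_version() {   # e.g. get_header_version MAJOR -> 25
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'   # the defines are tab-separated, hence cut -f2
    }
    version=$(get_header_version MAJOR).$(get_header_version MINOR)   # 25.1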
00:08:38.323 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:38.323 08:45:01 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:38.323 08:45:01 version -- common/autotest_common.sh@1689 -- # lcov --version 00:08:38.323 08:45:01 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:38.582 08:45:01 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:38.582 08:45:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.582 08:45:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.582 08:45:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.582 08:45:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.582 08:45:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.582 08:45:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.582 08:45:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.582 08:45:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.582 08:45:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.582 08:45:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.582 08:45:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.582 08:45:01 version -- scripts/common.sh@344 -- # case "$op" in 00:08:38.582 08:45:01 version -- scripts/common.sh@345 -- # : 1 00:08:38.582 08:45:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.582 08:45:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.582 08:45:01 version -- scripts/common.sh@365 -- # decimal 1 00:08:38.582 08:45:01 version -- scripts/common.sh@353 -- # local d=1 00:08:38.582 08:45:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.582 08:45:01 version -- scripts/common.sh@355 -- # echo 1 00:08:38.582 08:45:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.582 08:45:01 version -- scripts/common.sh@366 -- # decimal 2 00:08:38.582 08:45:01 version -- scripts/common.sh@353 -- # local d=2 00:08:38.582 08:45:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.582 08:45:01 version -- scripts/common.sh@355 -- # echo 2 00:08:38.582 08:45:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.582 08:45:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.582 08:45:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.582 08:45:01 version -- scripts/common.sh@368 -- # return 0 00:08:38.582 08:45:01 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.582 08:45:01 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.582 --rc genhtml_branch_coverage=1 00:08:38.582 --rc genhtml_function_coverage=1 00:08:38.582 --rc genhtml_legend=1 00:08:38.582 --rc geninfo_all_blocks=1 00:08:38.582 --rc geninfo_unexecuted_blocks=1 00:08:38.582 00:08:38.582 ' 00:08:38.582 08:45:01 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.582 --rc genhtml_branch_coverage=1 00:08:38.582 --rc genhtml_function_coverage=1 00:08:38.582 --rc genhtml_legend=1 00:08:38.582 --rc geninfo_all_blocks=1 00:08:38.582 --rc geninfo_unexecuted_blocks=1 00:08:38.582 00:08:38.582 ' 00:08:38.582 08:45:01 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:38.582 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.582 --rc genhtml_branch_coverage=1 00:08:38.582 --rc genhtml_function_coverage=1 00:08:38.582 --rc genhtml_legend=1 00:08:38.582 --rc geninfo_all_blocks=1 00:08:38.582 --rc geninfo_unexecuted_blocks=1 00:08:38.582 00:08:38.582 ' 00:08:38.582 08:45:01 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.582 --rc genhtml_branch_coverage=1 00:08:38.582 --rc genhtml_function_coverage=1 00:08:38.582 --rc genhtml_legend=1 00:08:38.582 --rc geninfo_all_blocks=1 00:08:38.582 --rc geninfo_unexecuted_blocks=1 00:08:38.582 00:08:38.582 ' 00:08:38.582 08:45:01 version -- app/version.sh@17 -- # get_header_version major 00:08:38.582 08:45:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:38.582 08:45:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.582 08:45:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.582 08:45:01 version -- app/version.sh@17 -- # major=25 00:08:38.582 08:45:01 version -- app/version.sh@18 -- # get_header_version minor 00:08:38.582 08:45:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:38.582 08:45:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.582 08:45:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.582 08:45:01 version -- app/version.sh@18 -- # minor=1 00:08:38.582 08:45:01 version -- app/version.sh@19 -- # get_header_version patch 00:08:38.582 08:45:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:38.582 08:45:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.582 08:45:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.582 08:45:01 version -- app/version.sh@19 -- # patch=0 00:08:38.582 08:45:01 version -- app/version.sh@20 -- # get_header_version suffix 00:08:38.582 08:45:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:38.583 08:45:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.583 08:45:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.583 08:45:01 version -- app/version.sh@20 -- # suffix=-pre 00:08:38.583 08:45:01 version -- app/version.sh@22 -- # version=25.1 00:08:38.583 08:45:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:38.583 08:45:01 version -- app/version.sh@28 -- # version=25.1rc0 00:08:38.583 08:45:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:38.583 08:45:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:38.583 08:45:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:38.583 08:45:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:38.583 00:08:38.583 real 0m0.222s 00:08:38.583 user 0m0.138s 00:08:38.583 sys 0m0.126s 00:08:38.583 08:45:01 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.583 08:45:01 version -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.583 ************************************ 00:08:38.583 END TEST version 00:08:38.583 ************************************ 00:08:38.583 08:45:01 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:38.583 08:45:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:38.583 08:45:01 -- spdk/autotest.sh@194 -- # uname -s 00:08:38.583 08:45:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:38.583 08:45:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.583 08:45:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.583 08:45:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:38.583 08:45:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:38.583 08:45:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:38.583 08:45:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.583 08:45:01 -- common/autotest_common.sh@10 -- # set +x 00:08:38.583 08:45:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:38.583 08:45:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:38.583 08:45:01 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:38.583 08:45:01 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:38.583 08:45:01 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:08:38.583 08:45:01 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:38.583 08:45:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.583 08:45:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.583 08:45:01 -- common/autotest_common.sh@10 -- # set +x 00:08:38.583 ************************************ 00:08:38.583 START TEST nvmf_rdma 00:08:38.583 ************************************ 00:08:38.583 08:45:01 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:38.842 * Looking for test storage... 00:08:38.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1689 -- # lcov --version 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.842 08:45:01 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:38.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.842 --rc genhtml_branch_coverage=1 00:08:38.842 --rc genhtml_function_coverage=1 00:08:38.842 --rc genhtml_legend=1 00:08:38.842 --rc geninfo_all_blocks=1 00:08:38.842 --rc geninfo_unexecuted_blocks=1 00:08:38.842 00:08:38.842 ' 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:38.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.842 --rc genhtml_branch_coverage=1 00:08:38.842 --rc genhtml_function_coverage=1 00:08:38.842 --rc genhtml_legend=1 00:08:38.842 --rc geninfo_all_blocks=1 00:08:38.842 --rc geninfo_unexecuted_blocks=1 00:08:38.842 00:08:38.842 ' 00:08:38.842 08:45:01 nvmf_rdma -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:38.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.842 --rc genhtml_branch_coverage=1 00:08:38.842 --rc genhtml_function_coverage=1 00:08:38.842 --rc genhtml_legend=1 00:08:38.843 --rc geninfo_all_blocks=1 00:08:38.843 --rc geninfo_unexecuted_blocks=1 00:08:38.843 00:08:38.843 ' 00:08:38.843 08:45:01 nvmf_rdma -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:38.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.843 --rc genhtml_branch_coverage=1 00:08:38.843 --rc genhtml_function_coverage=1 00:08:38.843 --rc genhtml_legend=1 00:08:38.843 --rc geninfo_all_blocks=1 00:08:38.843 --rc geninfo_unexecuted_blocks=1 00:08:38.843 00:08:38.843 ' 00:08:38.843 08:45:01 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:38.843 08:45:01 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:38.843 08:45:01 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:38.843 08:45:01 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.843 08:45:01 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.843 08:45:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:38.843 ************************************ 00:08:38.843 START TEST nvmf_target_core 00:08:38.843 ************************************ 00:08:38.843 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:39.102 * Looking for test storage... 00:08:39.102 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1689 -- # lcov --version 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.102 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.102 --rc genhtml_branch_coverage=1 00:08:39.102 --rc genhtml_function_coverage=1 00:08:39.102 --rc genhtml_legend=1 00:08:39.102 --rc geninfo_all_blocks=1 00:08:39.103 --rc geninfo_unexecuted_blocks=1 00:08:39.103 00:08:39.103 ' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:39.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.103 --rc genhtml_branch_coverage=1 00:08:39.103 --rc genhtml_function_coverage=1 00:08:39.103 --rc genhtml_legend=1 00:08:39.103 --rc geninfo_all_blocks=1 00:08:39.103 --rc geninfo_unexecuted_blocks=1 00:08:39.103 00:08:39.103 ' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:39.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.103 --rc genhtml_branch_coverage=1 00:08:39.103 --rc genhtml_function_coverage=1 00:08:39.103 --rc genhtml_legend=1 00:08:39.103 --rc geninfo_all_blocks=1 00:08:39.103 --rc geninfo_unexecuted_blocks=1 00:08:39.103 00:08:39.103 ' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:39.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.103 --rc genhtml_branch_coverage=1 00:08:39.103 --rc genhtml_function_coverage=1 00:08:39.103 --rc genhtml_legend=1 00:08:39.103 --rc geninfo_all_blocks=1 00:08:39.103 --rc geninfo_unexecuted_blocks=1 00:08:39.103 00:08:39.103 ' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.103 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.103 08:45:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.103 
************************************ 00:08:39.103 START TEST nvmf_abort 00:08:39.103 ************************************ 00:08:39.103 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:39.103 * Looking for test storage... 00:08:39.103 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:39.103 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:39.103 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lcov --version 00:08:39.103 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.362 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.363 --rc genhtml_branch_coverage=1 00:08:39.363 --rc genhtml_function_coverage=1 00:08:39.363 --rc genhtml_legend=1 00:08:39.363 --rc geninfo_all_blocks=1 00:08:39.363 --rc geninfo_unexecuted_blocks=1 00:08:39.363 00:08:39.363 ' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.363 --rc genhtml_branch_coverage=1 00:08:39.363 --rc genhtml_function_coverage=1 00:08:39.363 --rc genhtml_legend=1 00:08:39.363 --rc geninfo_all_blocks=1 00:08:39.363 --rc geninfo_unexecuted_blocks=1 00:08:39.363 00:08:39.363 ' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.363 --rc genhtml_branch_coverage=1 00:08:39.363 --rc genhtml_function_coverage=1 00:08:39.363 --rc genhtml_legend=1 00:08:39.363 --rc geninfo_all_blocks=1 00:08:39.363 --rc geninfo_unexecuted_blocks=1 00:08:39.363 00:08:39.363 ' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.363 --rc genhtml_branch_coverage=1 00:08:39.363 --rc genhtml_function_coverage=1 00:08:39.363 --rc genhtml_legend=1 00:08:39.363 --rc geninfo_all_blocks=1 00:08:39.363 --rc geninfo_unexecuted_blocks=1 00:08:39.363 00:08:39.363 ' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.363 08:45:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:45.936 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:45.936 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:45.936 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:45.937 Found net devices under 0000:da:00.0: mlx_0_0 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:45.937 Found net devices under 0000:da:00.1: mlx_0_1 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # rdma_device_init 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:45.937 08:45:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:45.937 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:08:45.937 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:45.937 altname enp218s0f0np0 00:08:45.937 altname ens818f0np0 00:08:45.937 inet 192.168.100.8/24 scope global mlx_0_0 00:08:45.937 valid_lft forever preferred_lft forever 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:45.937 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:45.937 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:45.937 altname enp218s0f1np1 00:08:45.937 altname ens818f1np1 00:08:45.937 inet 192.168.100.9/24 scope global mlx_0_1 00:08:45.937 valid_lft forever preferred_lft forever 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.937 08:45:08 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:45.937 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:45.938 192.168.100.9' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:45.938 192.168.100.9' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # head -n 1 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:45.938 192.168.100.9' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # tail -n +2 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # head -n 1 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:45.938 08:45:08 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=297753 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 297753 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 297753 ']' 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 [2024-11-06 08:45:08.174656] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:45.938 [2024-11-06 08:45:08.174704] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.938 [2024-11-06 08:45:08.253315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.938 [2024-11-06 08:45:08.293950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.938 [2024-11-06 08:45:08.293989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.938 [2024-11-06 08:45:08.293997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.938 [2024-11-06 08:45:08.294006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.938 [2024-11-06 08:45:08.294028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
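
By this point nvmftestinit has found the two mlx5 ports (0000:da:00.0/.1, exposed as mlx_0_0/mlx_0_1), loaded the RDMA kernel modules, and assigned 192.168.100.8 and .9. The block above is the harness's nvmfappstart: it launches the target binary (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, i.e. core mask 0xE = cores 1-3 with all trace groups enabled), records its pid (297753 here), and blocks in waitforlisten until the process answers on the default RPC socket /var/tmp/spdk.sock. The "line 33: [: : integer expression expected" messages earlier in the setup trace are bash complaining that a numeric test in test/nvmf/common.sh was fed an empty string ('[' '' -eq 1 ']'); that test simply evaluates false and the run continues. A minimal sketch of the same start-and-wait pattern — the wait_for_rpc_socket helper and its retry budget are illustrative stand-ins, not names from the harness:

    #!/usr/bin/env bash
    # Start the SPDK NVMe-oF target on cores 1-3 and wait for its RPC socket.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    wait_for_rpc_socket() {    # illustrative stand-in for the harness's waitforlisten
        local retries=100
        # rpc_get_methods succeeds only once the app listens on /var/tmp/spdk.sock
        until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods &>/dev/null; do
            (( retries-- > 0 )) || { echo "nvmf_tgt never came up" >&2; return 1; }
            sleep 0.1
        done
    }
    wait_for_rpc_socket
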
00:08:45.938 [2024-11-06 08:45:08.295365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.938 [2024-11-06 08:45:08.295474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.938 [2024-11-06 08:45:08.295475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 [2024-11-06 08:45:08.472960] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe86530/0xe8aa20) succeed. 00:08:45.938 [2024-11-06 08:45:08.492631] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe87b20/0xecc0c0) succeed. 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 Malloc0 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 Delay0 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
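
Everything tagged rpc_cmd above goes through scripts/rpc.py to the target's /var/tmp/spdk.sock socket. Condensed from the trace, the abort test provisions the target roughly as below; the two listener calls appear in the trace just after this point, and the comments on the short flags reflect rpc.py's option names as I read them, so treat them as annotation rather than authority:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # RDMA transport with 1024 shared data buffers (-u/-a size the I/O unit
    # and admin queue, per rpc.py's short options)
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    # 64 MiB malloc bdev with 4096-byte blocks, wrapped in a delay bdev that adds
    # large artificial read/write latency so submitted I/O stays in flight long
    # enough for abort commands to catch it
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Subsystem cnode0 (-a: allow any host), backed by Delay0, listening on RDMA
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
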
00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 [2024-11-06 08:45:08.659509] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.938 08:45:08 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:45.938 [2024-11-06 08:45:08.776228] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:48.474 Initializing NVMe Controllers 00:08:48.474 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:48.474 controller IO queue size 128 less than required 00:08:48.474 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:48.474 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:48.474 Initialization complete. Launching workers. 
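
The workload itself is the stock abort example from the SPDK tree, pointed at the listener created just above: one core (-c 0x1), a one-second run (-t 1), queue depth 128 (-q 128). The "IO queue size 128 less than required" warning is the tool noting that the controller's negotiated queue is smaller than the requested depth, so excess requests queue inside the driver. In the summary that follows, I/O counted as "failed" are the ones whose matching abort caught them in flight — which is exactly what the Delay0 latency is for — and the CTRLR line reports submitted/success/failed counts for the abort commands themselves. To reproduce the run by hand against a live target:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/abort \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
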
00:08:48.474 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42933 00:08:48.474 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42994, failed to submit 62 00:08:48.474 success 42934, unsuccessful 60, failed 0 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:48.474 rmmod nvme_rdma 00:08:48.474 rmmod nvme_fabrics 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 297753 ']' 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 297753 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 297753 ']' 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 297753 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.474 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 297753 00:08:48.475 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:48.475 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:48.475 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 297753' 00:08:48.475 killing process with pid 297753 00:08:48.475 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 297753 00:08:48.475 08:45:10 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 297753 00:08:48.475 08:45:11 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:48.475 00:08:48.475 real 0m9.212s 00:08:48.475 user 0m12.625s 00:08:48.475 sys 0m4.830s 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:48.475 ************************************ 00:08:48.475 END TEST nvmf_abort 00:08:48.475 ************************************ 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.475 ************************************ 00:08:48.475 START TEST nvmf_ns_hotplug_stress 00:08:48.475 ************************************ 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:48.475 * Looking for test storage... 00:08:48.475 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
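The trace above and continuing below steps through scripts/common.sh's cmp_versions helper, splitting "1.15" and "2" on IFS=.-: and comparing one component at a time. A condensed standalone sketch of the same idea (an approximation, not the exact SPDK implementation: missing components default to 0 here instead of going through the decimal helper the trace calls):

    lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the trace's result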
00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:48.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.475 --rc genhtml_branch_coverage=1 00:08:48.475 --rc genhtml_function_coverage=1 00:08:48.475 --rc genhtml_legend=1 00:08:48.475 --rc geninfo_all_blocks=1 00:08:48.475 --rc geninfo_unexecuted_blocks=1 00:08:48.475 00:08:48.475 ' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:48.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.475 --rc genhtml_branch_coverage=1 00:08:48.475 --rc genhtml_function_coverage=1 00:08:48.475 --rc genhtml_legend=1 00:08:48.475 --rc geninfo_all_blocks=1 00:08:48.475 --rc geninfo_unexecuted_blocks=1 00:08:48.475 00:08:48.475 ' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:48.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.475 --rc genhtml_branch_coverage=1 00:08:48.475 --rc genhtml_function_coverage=1 00:08:48.475 --rc genhtml_legend=1 00:08:48.475 --rc geninfo_all_blocks=1 00:08:48.475 --rc geninfo_unexecuted_blocks=1 00:08:48.475 00:08:48.475 ' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:48.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:48.475 --rc genhtml_branch_coverage=1 00:08:48.475 --rc genhtml_function_coverage=1 00:08:48.475 --rc genhtml_legend=1 00:08:48.475 --rc geninfo_all_blocks=1 00:08:48.475 --rc geninfo_unexecuted_blocks=1 00:08:48.475 00:08:48.475 ' 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.475 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.735 08:45:11 
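Each time paths/export.sh is sourced it prepends the same toolchain directories, which is why the PATH dump above repeats /opt/go, /opt/protoc and /opt/golangci many times over. The duplicates are harmless to lookup, but a hedged one-liner (ours, not part of the test scripts) that would collapse them while preserving order:

    # split PATH on ':', keep first occurrence of each dir, drop trailing ':'
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')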
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.735 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.735 08:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:55.313 08:45:17 
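The "[: : integer expression expected" message near the top of this block is nvmf/common.sh line 33 handing an empty string to a numeric test ('[' '' -eq 1 ']'). The script recovers, but the usual guard is to default the variable before the comparison; a minimal sketch with an illustrative variable name (the real variable is not visible in the trace):

    # SPDK_SOME_FLAG is hypothetical; ":-0" keeps [ -eq ] from seeing ''
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi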
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:55.313 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:55.313 08:45:17 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:55.313 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:55.313 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:55.314 Found net devices under 0000:da:00.0: mlx_0_0 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
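The loop above matches each PCI function against the mlx ID table (both ports in this run are 0x15b3:0x1015, a ConnectX-4 Lx) and then resolves the function to its netdev through sysfs, producing the "Found net devices under ..." lines. A minimal standalone sketch of that resolution step (the lspci filter is our shorthand for the script's cached PCI table):

    # list every Mellanox (0x15b3) function, then read its netdev name(s) from sysfs
    for pci in $(lspci -Dnd 15b3: | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
        done
    done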
00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:55.314 Found net devices under 0000:da:00.1: mlx_0_1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:55.314 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:55.314 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:55.314 altname enp218s0f0np0 00:08:55.314 altname ens818f0np0 00:08:55.314 inet 192.168.100.8/24 scope global mlx_0_0 00:08:55.314 valid_lft forever preferred_lft forever 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:55.314 08:45:17 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:55.314 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:55.314 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:55.314 altname enp218s0f1np1 00:08:55.314 altname ens818f1np1 00:08:55.314 inet 192.168.100.9/24 scope global mlx_0_1 00:08:55.314 valid_lft forever preferred_lft forever 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:55.314 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:55.315 192.168.100.9' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:55.315 192.168.100.9' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # head -n 1 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:55.315 192.168.100.9' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # tail -n +2 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # head -n 1 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=301870 00:08:55.315 08:45:17 
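The address harvesting traced above reduces to one pipeline per interface; the harness then derives the first and second target IPs from RDMA_IP_LIST with head/tail, as shown. A condensed sketch (function name, interface names and the resulting addresses are exactly those in the trace):

    get_ip_address() {
        # field 4 of 'ip -o -4 addr show <if>' is "addr/prefix"; strip the prefix
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run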
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 301870 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 301870 ']' 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.315 [2024-11-06 08:45:17.504169] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:55.315 [2024-11-06 08:45:17.504228] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.315 [2024-11-06 08:45:17.580810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:55.315 [2024-11-06 08:45:17.620951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.315 [2024-11-06 08:45:17.620986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.315 [2024-11-06 08:45:17.620997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.315 [2024-11-06 08:45:17.621002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.315 [2024-11-06 08:45:17.621007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
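nvmfappstart above comes down to launching the target in the background and blocking until its RPC socket answers; a simplified sketch (the binary path and flags are verbatim from the trace, while the polling loop is our approximation of waitforlisten, not its exact text):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # poll the default RPC socket until the app is up
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done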
00:08:55.315 [2024-11-06 08:45:17.622421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.315 [2024-11-06 08:45:17.622528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.315 [2024-11-06 08:45:17.622529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:55.315 08:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:55.315 [2024-11-06 08:45:17.947692] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19b9530/0x19bda20) succeed. 00:08:55.315 [2024-11-06 08:45:17.956466] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19bab20/0x19ff0c0) succeed. 00:08:55.315 08:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.315 08:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:55.574 [2024-11-06 08:45:18.442412] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:55.574 08:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:55.832 08:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:56.091 Malloc0 00:08:56.091 08:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:56.091 Delay0 00:08:56.091 08:45:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.350 08:45:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:08:56.608 NULL1 00:08:56.608 08:45:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:56.866 08:45:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=302141 00:08:56.866 08:45:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:56.866 08:45:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141 00:08:56.866 08:45:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.256 Read completed with error (sct=0, sc=11) 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 08:45:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.256 08:45:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:58.256 08:45:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:58.256 true 00:08:58.256 08:45:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141 00:08:58.256 08:45:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.193 08:45:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.452 08:45:22 
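From this point the log repeats the same cycle: while spdk_nvme_perf hammers the subsystem, the script removes the namespace, re-adds Delay0, grows NULL1 by one block, and checks that perf is still alive. Condensed from the trace (RPC names, the NQN, and all sizes are verbatim; the shell framing is our sketch of ns_hotplug_stress.sh, not its exact text):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # background I/O load against the target (flags verbatim from the trace)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    # hot-unplug/replug namespaces and resize NULL1 until perf exits
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
    done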
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:59.452 08:45:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:59.452 true 00:08:59.452 08:45:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141 00:08:59.452 08:45:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.389 08:45:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.648 08:45:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:00.648 08:45:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:00.648 true 00:09:00.648 08:45:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141 00:09:00.648 08:45:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.609 08:45:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.894 08:45:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:01.894 08:45:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:01.894 true 00:09:01.894 08:45:24 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141
00:09:01.894 08:45:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:02.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:02.918 08:45:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:02.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) [recorded 7 times]
00:09:02.918 08:45:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:09:02.918 08:45:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:09:03.200 true
[00:09:03.200-00:09:27.958: the same kill -0 302141 / nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 / nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 / bdev_null_resize NULL1 cycle repeats for null_size=1006 through 1027, interleaved with identical "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notices]
00:09:27.958 true
00:09:28.217 08:45:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141
00:09:28.217 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:28.217 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:28.475 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:28.476 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:28.734 true
00:09:28.734 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141
00:09:28.734 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:28.993 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:28.993 Initializing NVMe Controllers
00:09:28.993 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:28.993 Controller IO queue size 128, less than required.
00:09:28.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:28.993 Controller IO queue size 128, less than required.
00:09:28.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:28.993 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:28.993 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:28.993 Initialization complete. Launching workers.
00:09:28.993 ========================================================
00:09:28.993 Latency(us)
00:09:28.993 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:09:28.993 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5376.73       2.63   21544.02     901.21 1007117.00
00:09:28.993 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   34917.00      17.05    3665.62    2325.52  293473.09
00:09:28.993 ========================================================
00:09:28.993 Total                                                                           :   40293.73      19.67    6051.29     901.21 1007117.00
00:09:28.993
00:09:28.993 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:28.993 08:45:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:29.252 true
00:09:29.252 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 302141
00:09:29.252 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (302141) - No such process
00:09:29.252 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 302141
00:09:29.252 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:29.511 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
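The records above are the bash xtrace of a small hotplug loop in ns_hotplug_stress.sh (the sh@44-sh@50 markers): as long as the backgrounded I/O generator is alive, namespace 1 is hot-removed, the Delay0 bdev is re-attached, and the NULL1 null bdev is grown by one size unit. A minimal sketch of the assumed shape; rpc_py, perf_pid, and the starting null_size stand in for values the script sets elsewhere:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # perf_pid is assumed to hold the PID of the backgrounded I/O generator
    # (302141 in this run), e.g. perf_pid=$! right after launching it.
    null_size=1000
    while kill -0 "$perf_pid"; do
        # hot-remove namespace 1, then immediately re-attach the Delay0 bdev
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # grow NULL1 each pass (1005, 1006, ... in the trace above)
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done

The "No such process" record is this guard firing: once the I/O generator exits, kill -0 fails and the loop ends, after which the script reaps the process and removes both namespaces.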
00:09:29.771 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:29.771 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:29.771 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:29.771 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:29.771 08:45:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:29.771 null0
[00:09:29.771-00:09:31.326: the same (( ++i )) / (( i < nthreads )) / bdev_null_create cycle repeats for null1 through null7, each create echoing its bdev name (null1 at 00:09:30.030, null2 at 00:09:30.288, null3 and null4 at 00:09:30.547, null5 at 00:09:30.807, null6 at 00:09:31.066, null7 at 00:09:31.326)]
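With the single-namespace phase torn down, the script stands up eight null bdevs, one per upcoming worker (the sh@58-sh@60 markers). As the trace shows, bdev_null_create takes the bdev name, total size, and block size: null0..null7, size 100 and block size 4096 as logged. A sketch of the assumed setup loop, reusing rpc_py from the sketch above:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # one null bdev per worker: null0..null7, size 100, block size 4096
        $rpc_py bdev_null_create "null$i" 100 4096
    done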
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
[00:09:31.326: the remaining seven workers launch the same way (add_remove 2 null1 through add_remove 8 null7), their sh@14-sh@17 xtrace interleaving with the launcher's sh@62-sh@64 records]
00:09:31.326 08:45:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 307987 307989 307990 307992 307994 307996 307999 308000
[00:09:31.586-00:09:32.884: the eight backgrounded workers cycle /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns for namespace IDs 1-8 against null0-null7, their (( ++i )) / (( i < 10 )) counters and RPC calls interleaving freely; the trace continues with]
00:09:32.884 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:33.144 08:45:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:33.403 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:33.662 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.662 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.663 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:33.922 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.181 08:45:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:34.181 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.181 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:34.182 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.182 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.182 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:34.182 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:34.182 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.182 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.441 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:34.700 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:34.959 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.960 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:35.219 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.219 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:35.219 08:45:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
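The ns_hotplug_stress.sh@16-@18 trace above records a namespace churn loop: ten passes, each adding namespaces 1-8 (nsid N backed by bdev null(N-1)) to nqn.2016-06.io.spdk:cnode1 over rpc.py and then removing them again. A minimal sketch of such a loop is below; it is a reconstruction, not the verbatim SPDK script, and the backgrounded, shuffled RPCs are an assumption made to match the interleaved ordering in the trace.

    # Hypothetical reconstruction of the churn traced at ns_hotplug_stress.sh@16-@18.
    # rpc_py, shuf, and the background jobs are assumptions, not the real script.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 10; ++i)); do
        for nsid in $(seq 1 8 | shuf); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "null$((nsid - 1))" &
        done
        wait
        for nsid in $(seq 1 8 | shuf); do
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid" &
        done
        wait
    done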
00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:35.219 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:35.478 rmmod nvme_rdma
00:09:35.478 rmmod nvme_fabrics
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 301870 ']'
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 301870
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 301870 ']'
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 301870
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 301870
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 301870'
00:09:35.478 killing process with pid 301870
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 301870
00:09:35.478 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 301870
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:09:35.738
00:09:35.738 real 0m47.258s
00:09:35.738 user 3m21.015s
00:09:35.738 sys 0m11.769s
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:35.738 ************************************
00:09:35.738 END TEST nvmf_ns_hotplug_stress
00:09:35.738 ************************************
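The real/user/sys summary and the banner above come from autotest's run_test wrapper, which frames each sub-test and times it before the next test starts below. A plausible simplified shape of that wrapper (the actual helper in common/autotest_common.sh also handles exit codes and xtrace state):

    # Hypothetical simplified run_test; bookkeeping of the real helper omitted.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }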
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:35.738 ************************************
00:09:35.738 START TEST nvmf_delete_subsystem
00:09:35.738 ************************************
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:09:35.738 * Looking for test storage...
00:09:35.738 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lcov --version
00:09:35.738 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
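The scripts/common.sh@333-@368 trace above is a component-wise version comparison: lt 1.15 2 splits both versions on '.', '-' and ':' (IFS=.-:) and walks the components until one side wins; here it returns 0 at the first position because 1 < 2, so the installed lcov 1.15 is treated as older than 2. A condensed sketch of that walk, reduced to the less-than case exercised here:

    # Hypothetical condensed version of the cmp_versions loop traced above.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1  # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"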
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:09:35.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.998 --rc genhtml_branch_coverage=1
00:09:35.998 --rc genhtml_function_coverage=1
00:09:35.998 --rc genhtml_legend=1
00:09:35.998 --rc geninfo_all_blocks=1
00:09:35.998 --rc geninfo_unexecuted_blocks=1
00:09:35.998
00:09:35.998 '
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:09:35.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.998 --rc genhtml_branch_coverage=1
00:09:35.998 --rc genhtml_function_coverage=1
00:09:35.998 --rc genhtml_legend=1
00:09:35.998 --rc geninfo_all_blocks=1
00:09:35.998 --rc geninfo_unexecuted_blocks=1
00:09:35.998
00:09:35.998 '
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:09:35.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.998 --rc genhtml_branch_coverage=1
00:09:35.998 --rc genhtml_function_coverage=1
00:09:35.998 --rc genhtml_legend=1
00:09:35.998 --rc geninfo_all_blocks=1
00:09:35.998 --rc geninfo_unexecuted_blocks=1
00:09:35.998
00:09:35.998 '
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:09:35.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.998 --rc genhtml_branch_coverage=1
00:09:35.998 --rc genhtml_function_coverage=1
00:09:35.998 --rc genhtml_legend=1
00:09:35.998 --rc geninfo_all_blocks=1
00:09:35.998 --rc geninfo_unexecuted_blocks=1
00:09:35.998
00:09:35.998 '
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:35.998 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.999 08:45:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:35.999 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
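The '[: : integer expression expected' complaint above is a benign bash artifact rather than a test failure: nvmf/common.sh line 33 hands an empty (unset) variable to test's numeric -eq operator, the comparison exits with status 2, and since it is only used as a condition the script simply takes the false branch. A two-line reproduction, with FLAG standing in for whatever variable is empty here:

    unset FLAG
    [ "$FLAG" -eq 1 ]   # bash: [: : integer expression expected (exit status 2)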
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:42.573 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)'
00:09:42.574 Found 0000:da:00.0 (0x15b3 - 0x1015)
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:42.574
08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:42.574 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:42.574 Found net devices under 0000:da:00.0: mlx_0_0 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:42.574 Found net devices under 0000:da:00.1: mlx_0_1 00:09:42.574 08:46:04 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # rdma_device_init 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:42.574 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:42.574 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:09:42.574 altname enp218s0f0np0 00:09:42.574 altname ens818f0np0 00:09:42.574 inet 192.168.100.8/24 scope global mlx_0_0 00:09:42.574 valid_lft forever preferred_lft forever 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:42.574 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:42.574 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:09:42.574 altname enp218s0f1np1 00:09:42.574 
altname ens818f1np1 00:09:42.574 inet 192.168.100.9/24 scope global mlx_0_1 00:09:42.574 valid_lft forever preferred_lft forever 00:09:42.574 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:42.575 08:46:04 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:42.575 192.168.100.9' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:42.575 192.168.100.9' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # head -n 1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:42.575 192.168.100.9' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # tail -n +2 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # head -n 1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=312140 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 312140 00:09:42.575 08:46:04 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 312140 ']' 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 [2024-11-06 08:46:04.781103] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:09:42.575 [2024-11-06 08:46:04.781154] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.575 [2024-11-06 08:46:04.857564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.575 [2024-11-06 08:46:04.897683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.575 [2024-11-06 08:46:04.897718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.575 [2024-11-06 08:46:04.897725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.575 [2024-11-06 08:46:04.897731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.575 [2024-11-06 08:46:04.897735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
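For reference, the get_ip_address helper traced earlier (nvmf/common.sh@116-@117) amounts to pulling the first IPv4 address off an interface. A self-contained sketch using the exact pipeline from the trace; the interface name is just this rig's example:

    #!/usr/bin/env bash
    get_ip_address() {
        local interface=$1
        # "ip -o" emits one record per line; field 4 holds e.g. "192.168.100.8/24",
        # and cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed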
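The waitforlisten call above blocks until the freshly started nvmf_tgt (pid 312140) answers on /var/tmp/spdk.sock. A plausible minimal version, assuming scripts/rpc.py and the standard rpc_get_methods RPC; the real helper in common/autotest_common.sh carries extra retries and error handling:

    #!/usr/bin/env bash
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while true; do
            # Give up if the target died before it ever listened.
            kill -0 "$pid" 2> /dev/null || return 1
            # Any successful RPC proves the socket is up.
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
    }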
00:09:42.575 [2024-11-06 08:46:04.898920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.575 [2024-11-06 08:46:04.898920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.575 08:46:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 [2024-11-06 08:46:05.063205] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf456e0/0xf49bd0) succeed. 00:09:42.575 [2024-11-06 08:46:05.071986] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf46c30/0xf8b270) succeed. 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 [2024-11-06 08:46:05.151299] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 NULL1 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.575 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.575 Delay0 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=312166 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:42.576 08:46:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:42.576 [2024-11-06 08:46:05.285026] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
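Outside the rpc_cmd wrapper, the setup sequence just traced maps onto direct scripts/rpc.py invocations. Every flag below is copied from the trace, but treat this as an illustrative sketch rather than the test script itself:

    #!/usr/bin/env bash
    rpc=scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    # Delay0 wraps NULL1 with ~1 s latency on reads and writes (values in microseconds),
    # so in-flight perf IO is guaranteed to straddle the upcoming subsystem deletion.
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # perf then hammers the delayed namespace while the subsystem is deleted under it:
    build/bin/spdk_nvme_perf -c 0xC -q 128 -o 512 -w randrw -M 70 -t 5 -P 4 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'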
00:09:44.520 08:46:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:44.520 08:46:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.520 08:46:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:45.455 NVMe io qpair process completion error
00:09:45.455 NVMe io qpair process completion error
00:09:45.455 NVMe io qpair process completion error
00:09:45.455 NVMe io qpair process completion error
00:09:45.455 NVMe io qpair process completion error
00:09:45.455 NVMe io qpair process completion error
00:09:45.455 08:46:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.455 08:46:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:45.455 08:46:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 312166
00:09:45.455 08:46:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:46.022 08:46:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:46.022 08:46:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 312166
00:09:46.022 08:46:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:46.591 [several hundred interleaved "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completion records from the two failing perf qpairs elided]
00:09:46.593 Initializing NVMe Controllers
00:09:46.593 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:46.593 Controller IO queue size 128, less than required.
00:09:46.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:46.593 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:46.593 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:46.593 Initialization complete. Launching workers.
00:09:46.593 ========================================================
00:09:46.593                                                                              Latency(us)
00:09:46.593 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:09:46.593 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   80.60    0.04 1593336.43 1000156.07 2967071.57
00:09:46.593 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   80.60    0.04 1591714.48 1000074.75 2967141.68
00:09:46.593 ========================================================
00:09:46.593 Total                                                                    :  161.19    0.08 1592525.46 1000074.75 2967141.68
00:09:46.593
00:09:46.593 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:46.593 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 312166
00:09:46.593 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:46.593 [2024-11-06 08:46:09.395557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:09:46.593 [2024-11-06 08:46:09.395596] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
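Pieced together from the delete_subsystem.sh@32-@38 records above, the pass/fail logic of this phase is roughly the following; this is a reconstruction from the trace, not a verbatim quote of the script:

    #!/usr/bin/env bash
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem under live IO

    delay=0
    while kill -0 $perf_pid; do          # is spdk_nvme_perf still running?
        (( delay++ > 30 )) && exit 1     # ~15 s budget at 0.5 s per poll
        sleep 0.5
    done
    # The "kill: (pid) - No such process" seen just below is the success path:
    # perf noticed the deleted subsystem, reported its IO errors, and exited.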
00:09:46.593 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:47.159 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:47.159 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 312166 00:09:47.159 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (312166) - No such process 00:09:47.159 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 312166 00:09:47.159 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:47.159 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 312166 00:09:47.159 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 312166 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.160 [2024-11-06 08:46:09.916540] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=313082 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082 00:09:47.160 08:46:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:47.160 [2024-11-06 08:46:10.030308] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:47.727 08:46:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:47.727 08:46:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082 00:09:47.727 08:46:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:47.986 08:46:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:47.986 08:46:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082 00:09:47.986 08:46:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.554 08:46:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:48.554 08:46:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082 00:09:48.554 08:46:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:49.122 08:46:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.122 08:46:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082 00:09:49.122 08:46:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:49.690 08:46:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.690 08:46:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082 00:09:49.690 08:46:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:49.948 08:46:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.948 08:46:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082 00:09:49.948 08:46:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 
-- # sleep 0.5
[the delete_subsystem.sh@57-@60 poll loop repeats "kill -0 313082" / "sleep 0.5" about twice per second from 00:09:50.516 (08:46:13) through 00:09:54.182 (08:46:16); the identical iterations are elided]
00:09:54.182 Initializing NVMe Controllers
00:09:54.182 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:54.182 Controller IO queue size 128, less than required.
00:09:54.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:54.182 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:54.182 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:54.182 Initialization complete. Launching workers.
00:09:54.182 ========================================================
00:09:54.182 Latency(us)
00:09:54.182 Device Information : IOPS MiB/s Average min max
00:09:54.182 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001459.36 1000052.89 1004123.35
00:09:54.182 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002588.27 1000232.27 1006261.43
00:09:54.182 ========================================================
00:09:54.182 Total : 256.00 0.12 1002023.81 1000052.89 1006261.43
00:09:54.182
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 313082
00:09:54.750 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (313082) - No such process
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 313082
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:54.750 rmmod nvme_rdma
00:09:54.750 rmmod nvme_fabrics
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 312140 ']'
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 312140
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 312140 ']'
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 312140
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:54.750
08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 312140 00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 312140' 00:09:54.750 killing process with pid 312140 00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 312140 00:09:54.750 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 312140 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:55.009 00:09:55.009 real 0m19.178s 00:09:55.009 user 0m48.884s 00:09:55.009 sys 0m5.483s 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.009 ************************************ 00:09:55.009 END TEST nvmf_delete_subsystem 00:09:55.009 ************************************ 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.009 ************************************ 00:09:55.009 START TEST nvmf_host_management 00:09:55.009 ************************************ 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:55.009 * Looking for test storage... 
00:09:55.009 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lcov --version 00:09:55.009 08:46:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.269 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.270 --rc genhtml_branch_coverage=1 00:09:55.270 --rc genhtml_function_coverage=1 00:09:55.270 --rc genhtml_legend=1 00:09:55.270 --rc geninfo_all_blocks=1 00:09:55.270 --rc geninfo_unexecuted_blocks=1 00:09:55.270 00:09:55.270 ' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.270 --rc genhtml_branch_coverage=1 00:09:55.270 --rc genhtml_function_coverage=1 00:09:55.270 --rc genhtml_legend=1 00:09:55.270 --rc geninfo_all_blocks=1 00:09:55.270 --rc geninfo_unexecuted_blocks=1 00:09:55.270 00:09:55.270 ' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.270 --rc genhtml_branch_coverage=1 00:09:55.270 --rc genhtml_function_coverage=1 00:09:55.270 --rc genhtml_legend=1 00:09:55.270 --rc geninfo_all_blocks=1 00:09:55.270 --rc geninfo_unexecuted_blocks=1 00:09:55.270 00:09:55.270 ' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.270 --rc genhtml_branch_coverage=1 00:09:55.270 --rc genhtml_function_coverage=1 00:09:55.270 --rc genhtml_legend=1 00:09:55.270 --rc geninfo_all_blocks=1 00:09:55.270 --rc geninfo_unexecuted_blocks=1 00:09:55.270 00:09:55.270 ' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.270 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.270 08:46:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:01.847 08:46:23 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:01.847 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:01.848 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:01.848 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:01.848 Found net devices under 0000:da:00.0: mlx_0_0 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found 
net devices under 0000:da:00.1: mlx_0_1' 00:10:01.848 Found net devices under 0000:da:00.1: mlx_0_1 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # rdma_device_init 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.848 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:01.849 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.849 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:01.849 altname enp218s0f0np0 00:10:01.849 altname ens818f0np0 00:10:01.849 inet 192.168.100.8/24 scope global mlx_0_0 00:10:01.849 valid_lft forever preferred_lft forever 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:01.849 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.849 link/ether ec:0d:9a:8b:2d:9d brd 
ff:ff:ff:ff:ff:ff 00:10:01.849 altname enp218s0f1np1 00:10:01.849 altname ens818f1np1 00:10:01.849 inet 192.168.100.9/24 scope global mlx_0_1 00:10:01.849 valid_lft forever preferred_lft forever 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:01.849 08:46:23 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:01.849 192.168.100.9' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:01.849 192.168.100.9' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # head -n 1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:01.849 192.168.100.9' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # tail -n +2 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # head -n 1 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.849 08:46:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=317548 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 317548 00:10:01.849 
08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 317548 ']' 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.849 [2024-11-06 08:46:24.052104] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:01.849 [2024-11-06 08:46:24.052156] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.849 [2024-11-06 08:46:24.126004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.849 [2024-11-06 08:46:24.167272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.849 [2024-11-06 08:46:24.167308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.849 [2024-11-06 08:46:24.167315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.849 [2024-11-06 08:46:24.167321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.849 [2024-11-06 08:46:24.167328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
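At this point nvmfappstart has launched nvmf_tgt with core mask 0x1E and waitforlisten is blocking until the target answers on /var/tmp/spdk.sock. A sketch of that readiness check, assuming the stock scripts/rpc.py (the retry count and interval here are illustrative):

# Start the target, then poll its RPC socket until it responds.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for _ in {1..100}; do
    # rpc_get_methods only succeeds once the app is up and listening on /var/tmp/spdk.sock
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done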
00:10:01.849 [2024-11-06 08:46:24.168919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.849 [2024-11-06 08:46:24.169025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.849 [2024-11-06 08:46:24.169131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.849 [2024-11-06 08:46:24.169133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.849 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.850 [2024-11-06 08:46:24.335112] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b890a0/0x1b8d590) succeed. 00:10:01.850 [2024-11-06 08:46:24.344160] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b8a730/0x1bcec30) succeed. 
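With the RDMA transport created and both mlx5 IB devices registered, host_management.sh next assembles an rpcs.txt batch and pipes it through rpc_cmd (the cat and rpc_cmd steps follow below). The batch itself is not echoed in the trace, only its effects: a Malloc0 bdev and the listener on 192.168.100.8:4420. A plausible reconstruction, not the script's exact text:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$rpc" bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE=64 (MiB) and MALLOC_BLOCK_SIZE=512 from the vars above
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host0 matches the hostnqn bdevperf uses below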
00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.850 Malloc0 00:10:01.850 [2024-11-06 08:46:24.537043] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=317595 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 317595 /var/tmp/bdevperf.sock 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 317595 ']' 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:01.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:01.850 { 00:10:01.850 "params": { 00:10:01.850 "name": "Nvme$subsystem", 00:10:01.850 "trtype": "$TEST_TRANSPORT", 00:10:01.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.850 "adrfam": "ipv4", 00:10:01.850 "trsvcid": "$NVMF_PORT", 00:10:01.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.850 "hdgst": ${hdgst:-false}, 00:10:01.850 "ddgst": ${ddgst:-false} 00:10:01.850 }, 00:10:01.850 "method": "bdev_nvme_attach_controller" 00:10:01.850 } 00:10:01.850 EOF 00:10:01.850 )") 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:01.850 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:01.850 "params": { 00:10:01.850 "name": "Nvme0", 00:10:01.850 "trtype": "rdma", 00:10:01.850 "traddr": "192.168.100.8", 00:10:01.850 "adrfam": "ipv4", 00:10:01.850 "trsvcid": "4420", 00:10:01.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:01.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:01.850 "hdgst": false, 00:10:01.850 "ddgst": false 00:10:01.850 }, 00:10:01.850 "method": "bdev_nvme_attach_controller" 00:10:01.850 }' 00:10:01.850 [2024-11-06 08:46:24.629257] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:01.850 [2024-11-06 08:46:24.629300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317595 ] 00:10:01.850 [2024-11-06 08:46:24.702340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.850 [2024-11-06 08:46:24.743563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.109 Running I/O for 10 seconds... 
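The JSON document rendered just above is what bdevperf receives as '--json /dev/fd/63': gen_nvmf_target_json writes it to a process substitution, so no config file touches disk. A sketch of that wiring using the values from the trace (the real helper assembles the document with jq; this inlines an equivalent one):

gen_cfg() {
    cat <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "rdma", "traddr": "192.168.100.8",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
    }]
  }]
}
EOF
}
# <(gen_cfg) is exactly what shows up as /dev/fd/63 in the traced command line.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json <(gen_cfg) -q 64 -o 65536 -w verify -t 10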
00:10:02.109 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.109 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:02.109 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:02.109 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.109 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:02.109 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.109 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.110 08:46:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=171 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 171 -ge 100 ']' 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
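The waitforio step above polls bdevperf's iostat until the Nvme0n1 bdev shows at least 100 completed reads (171 on the first check here), proving traffic is flowing before the host is removed. A sketch of that helper with the RPC and jq filter from the trace (retry pacing is illustrative):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
waitforio() {    # succeed once the named bdev has completed >= 100 reads
    local sock=$1 bdev=$2 reads i
    for (( i = 0; i < 10; i++ )); do
        reads=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && return 0     # trace: read_io_count=171, so the first probe passed
        sleep 0.25
    done
    return 1
}
waitforio /var/tmp/bdevperf.sock Nvme0n1 || exit 1   # fail fast if no I/O before the remove_host step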
00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.110 08:46:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:10:03.048 296.00 IOPS, 18.50 MiB/s [2024-11-06T07:46:26.062Z]
00:10:03.048 [2024-11-06 08:46:26.058495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff80 len:0x10000 key:0x182900
00:10:03.048 [2024-11-06 08:46:26.058524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c9e1000 sqhd:7210 p:0 m:0 dnr:0
00:10:03.048 [... 62 further command/completion pairs with the same ABORTED - SQ DELETION (00/08) status: WRITE lba 41088-45952 (keys 0x182900/0x182a00) and READ lba 37888-40704 (key 0x182400) ...]
00:10:03.050 [2024-11-06 08:46:26.059464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a618000 len:0x10000 key:0x182400
00:10:03.050 [2024-11-06 08:46:26.059471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3c9e1000 sqhd:7210 p:0 m:0 dnr:0
00:10:03.309 [2024-11-06 08:46:26.062177] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:10:03.309 task offset: 40960 on job bdev=Nvme0n1 fails
00:10:03.309
00:10:03.309 Latency(us)
00:10:03.309 [2024-11-06T07:46:26.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:03.309 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:03.309 Job: Nvme0n1 ended in about 1.13 seconds with error
00:10:03.309 Verification LBA range: start 0x0 length 0x400
00:10:03.309 Nvme0n1 : 1.13 262.78 16.42 56.82 0.00 198682.01 2449.80 1014622.11
00:10:03.309 [2024-11-06T07:46:26.323Z] ===================================================================================================================
00:10:03.309 [2024-11-06T07:46:26.323Z] Total : 262.78 16.42 56.82 0.00 198682.01 2449.80 1014622.11
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 317595
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:10:03.309 {
00:10:03.309 "params": {
00:10:03.309 "name": "Nvme$subsystem",
00:10:03.309 "trtype": "$TEST_TRANSPORT",
00:10:03.309 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:03.309 "adrfam": "ipv4",
00:10:03.309 "trsvcid": "$NVMF_PORT",
00:10:03.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:03.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:03.309 "hdgst": ${hdgst:-false},
00:10:03.309 "ddgst": ${ddgst:-false}
00:10:03.309 },
00:10:03.309 "method": "bdev_nvme_attach_controller"
00:10:03.309 }
00:10:03.309 EOF
00:10:03.309 )")
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:10:03.309 08:46:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:10:03.309 "params": {
00:10:03.309 "name": "Nvme0",
00:10:03.309 "trtype": "rdma",
00:10:03.309 "traddr": "192.168.100.8",
00:10:03.309 "adrfam": "ipv4",
00:10:03.309 "trsvcid": "4420",
00:10:03.309 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:10:03.309 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:10:03.309 "hdgst": false,
00:10:03.309 "ddgst": false
00:10:03.309 },
00:10:03.309 "method": "bdev_nvme_attach_controller"
00:10:03.309 }'
00:10:03.309 [2024-11-06 08:46:26.110896] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:10:03.309 [2024-11-06 08:46:26.110937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317840 ]
00:10:03.309 [2024-11-06 08:46:26.187122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:03.309 [2024-11-06 08:46:26.228033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.570 Running I/O for 1 seconds...
00:10:04.508 2985.00 IOPS, 186.56 MiB/s
00:10:04.508 Latency(us)
00:10:04.508 [2024-11-06T07:46:27.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:04.508 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:04.508 Verification LBA range: start 0x0 length 0x400
00:10:04.508 Nvme0n1 : 1.01 3007.86 187.99 0.00 0.00 20843.31 628.05 40694.74
00:10:04.508 [2024-11-06T07:46:27.522Z] ===================================================================================================================
00:10:04.508 [2024-11-06T07:46:27.522Z] Total : 3007.86 187.99 0.00 0.00 20843.31 628.05 40694.74
00:10:04.768 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 317595 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
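Everything from the SQ-deletion aborts through the clean 1-second follow-up run above is the host-management scenario itself: the host NQN is pulled from the subsystem while bdevperf is mid-I/O, so every queued command completes as ABORTED - SQ DELETION and the initiator resets the controller; the host is then re-added so a fresh run can verify recovery. Reduced to the RPC sequence, with the rpc.py path assumed from this workspace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# revoke the host's access while I/O is in flight -> aborts + controller reset
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
# restore access; a subsequent bdevperf run should now complete error-free
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0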
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:04.768 rmmod nvme_rdma
00:10:04.768 rmmod nvme_fabrics
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 317548 ']'
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 317548
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 317548 ']'
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 317548
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 317548
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 317548'
00:10:04.768 killing process with pid 317548
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 317548
00:10:04.768 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 317548
00:10:05.027 [2024-11-06 08:46:27.949716] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:10:05.027 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:10:05.027 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:10:05.027 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:10:05.027
00:10:05.027 real	0m10.087s
00:10:05.027 user	0m19.703s
00:10:05.027 sys	0m5.242s
00:10:05.027 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:05.027 08:46:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:05.027 ************************************
00:10:05.027 END TEST nvmf_host_management
00:10:05.027 ************************************
00:10:05.027 08:46:28 nvmf_rdma.nvmf_target_core --
nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:05.027 08:46:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.027 08:46:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.027 08:46:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.287 ************************************ 00:10:05.287 START TEST nvmf_lvol 00:10:05.287 ************************************ 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:05.287 * Looking for test storage... 00:10:05.287 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lcov --version 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.287 --rc genhtml_branch_coverage=1 00:10:05.287 --rc genhtml_function_coverage=1 00:10:05.287 --rc genhtml_legend=1 00:10:05.287 --rc geninfo_all_blocks=1 00:10:05.287 --rc geninfo_unexecuted_blocks=1 00:10:05.287 00:10:05.287 ' 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.287 --rc genhtml_branch_coverage=1 00:10:05.287 --rc genhtml_function_coverage=1 00:10:05.287 --rc genhtml_legend=1 00:10:05.287 --rc geninfo_all_blocks=1 00:10:05.287 --rc geninfo_unexecuted_blocks=1 00:10:05.287 00:10:05.287 ' 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.287 --rc genhtml_branch_coverage=1 00:10:05.287 --rc genhtml_function_coverage=1 00:10:05.287 --rc genhtml_legend=1 00:10:05.287 --rc geninfo_all_blocks=1 00:10:05.287 --rc geninfo_unexecuted_blocks=1 00:10:05.287 00:10:05.287 ' 00:10:05.287 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.287 --rc genhtml_branch_coverage=1 00:10:05.288 --rc genhtml_function_coverage=1 00:10:05.288 --rc genhtml_legend=1 00:10:05.288 --rc geninfo_all_blocks=1 00:10:05.288 --rc geninfo_unexecuted_blocks=1 00:10:05.288 00:10:05.288 ' 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.288 08:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.861 08:46:33 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:11.861 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:11.861 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:11.861 08:46:33 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:11.861 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:11.862 Found net devices under 0000:da:00.0: mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:11.862 Found net devices under 0000:da:00.1: mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # rdma_device_init 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
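The modprobe run above (continuing below with ib_umad, ib_uverbs, iw_cm, rdma_cm and rdma_ucm) is load_ib_rdma_modules bringing up the kernel IB/RDMA stack before allocate_nic_ips reads each RDMA netdev's IPv4 with the ip/awk/cut pipeline traced further down. A standalone sketch of both steps, with the interface name mlx_0_0 assumed from this log:

# load the IB/RDMA core stack in the order the trace shows
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"
done
# then, per RDMA interface, extract the primary IPv4 the same way get_ip_address does
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8 on this node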
00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:11.862 
08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:11.862 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.862 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:11.862 altname enp218s0f0np0 00:10:11.862 altname ens818f0np0 00:10:11.862 inet 192.168.100.8/24 scope global mlx_0_0 00:10:11.862 valid_lft forever preferred_lft forever 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:11.862 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.862 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:11.862 altname enp218s0f1np1 00:10:11.862 altname ens818f1np1 00:10:11.862 inet 192.168.100.9/24 scope global mlx_0_1 00:10:11.862 valid_lft forever preferred_lft forever 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.862 08:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.862 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:11.862 192.168.100.9' 00:10:11.862 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:11.863 192.168.100.9' 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # head -n 1 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:11.863 192.168.100.9' 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # tail -n +2 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # head -n 1 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:11.863 
08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=321373 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 321373 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 321373 ']' 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:11.863 [2024-11-06 08:46:34.095053] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:11.863 [2024-11-06 08:46:34.095101] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.863 [2024-11-06 08:46:34.169620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.863 [2024-11-06 08:46:34.209151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.863 [2024-11-06 08:46:34.209189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.863 [2024-11-06 08:46:34.209197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.863 [2024-11-06 08:46:34.209207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.863 [2024-11-06 08:46:34.209213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
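Here the harness brings up the SPDK NVMe-oF target on three cores (mask 0x7) and blocks until the application answers on its UNIX-domain RPC socket. A minimal sketch of that start-and-wait pattern, assuming $SPDK points at the checkout used above; the polling loop is illustrative rather than the harness's actual waitforlisten implementation, and it relies on the spdk_get_version RPC failing until the target is up:

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
    tgt_pid=$!

    # poll the RPC socket until the target is ready to serve requests
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($tgt_pid) listening on /var/tmp/spdk.sock"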
00:10:11.863 [2024-11-06 08:46:34.210506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.863 [2024-11-06 08:46:34.210613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.863 [2024-11-06 08:46:34.210615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:11.863 [2024-11-06 08:46:34.536569] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xec2230/0xec6720) succeed. 00:10:11.863 [2024-11-06 08:46:34.545371] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xec3820/0xf07dc0) succeed. 00:10:11.863 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.124 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:12.124 08:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.124 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:12.124 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:12.385 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:12.644 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=01bf241d-a446-4fe8-a093-5e4dfb21a14c 00:10:12.644 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 01bf241d-a446-4fe8-a093-5e4dfb21a14c lvol 20 00:10:12.902 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=481c082e-0256-4c66-bbca-395dcccd7a64 00:10:12.902 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:12.902 08:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 481c082e-0256-4c66-bbca-395dcccd7a64 00:10:13.160 08:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:13.418 [2024-11-06 08:46:36.304472] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:13.418 08:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:13.676 08:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=321866 00:10:13.676 08:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:13.676 08:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:14.612 08:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 481c082e-0256-4c66-bbca-395dcccd7a64 MY_SNAPSHOT 00:10:14.870 08:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=af193855-4bbf-4399-8e7c-216875edba61 00:10:14.870 08:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 481c082e-0256-4c66-bbca-395dcccd7a64 30 00:10:15.128 08:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone af193855-4bbf-4399-8e7c-216875edba61 MY_CLONE 00:10:15.128 08:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=476d5bad-f3ad-4d8b-87cd-89a33d8db2a1 00:10:15.128 08:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 476d5bad-f3ad-4d8b-87cd-89a33d8db2a1 00:10:15.386 08:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 321866 00:10:25.362 Initializing NVMe Controllers 00:10:25.362 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:25.362 Controller IO queue size 128, less than required. 00:10:25.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:25.362 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:25.362 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:25.362 Initialization complete. Launching workers. 
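Stripped of the trace interleaving, the nvmf_lvol flow up to this point is a straight RPC sequence: build two malloc bdevs into a raid0, carve a logical volume store and a 20 MiB lvol out of it, export the lvol over RDMA, then snapshot, resize, clone, and inflate it while spdk_nvme_perf drives random writes. A condensed replay of those calls (the perf latency summary follows below), assuming $SPDK points at the checkout used above; capturing the returned UUIDs via command substitution is illustrative:

    rpc="$SPDK/scripts/rpc.py"
    $rpc bdev_malloc_create 64 512                    # -> Malloc0
    $rpc bdev_malloc_create 64 512                    # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                  # grow the live lvol under I/O
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                   # detach the clone from its snapshot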
00:10:25.362 ======================================================== 00:10:25.362 Latency(us) 00:10:25.362 Device Information : IOPS MiB/s Average min max 00:10:25.362 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16104.32 62.91 7949.96 2467.20 52776.54 00:10:25.362 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16217.62 63.35 7893.55 3222.03 49574.57 00:10:25.362 ======================================================== 00:10:25.362 Total : 32321.95 126.26 7921.66 2467.20 52776.54 00:10:25.362 00:10:25.362 08:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:25.362 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 481c082e-0256-4c66-bbca-395dcccd7a64 00:10:25.362 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 01bf241d-a446-4fe8-a093-5e4dfb21a14c 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:25.621 rmmod nvme_rdma 00:10:25.621 rmmod nvme_fabrics 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 321373 ']' 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 321373 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 321373 ']' 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 321373 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.621 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321373 00:10:25.880 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.880 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.880 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321373' 00:10:25.880 killing process with pid 321373 00:10:25.880 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 321373 00:10:25.880 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 321373 00:10:26.138 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:26.138 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:26.138 00:10:26.138 real 0m20.875s 00:10:26.139 user 1m10.559s 00:10:26.139 sys 0m5.536s 00:10:26.139 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.139 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:26.139 ************************************ 00:10:26.139 END TEST nvmf_lvol 00:10:26.139 ************************************ 00:10:26.139 08:46:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:26.139 08:46:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:26.139 08:46:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.139 08:46:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.139 ************************************ 00:10:26.139 START TEST nvmf_lvs_grow 00:10:26.139 ************************************ 00:10:26.139 08:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:26.139 * Looking for test storage... 
00:10:26.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lcov --version 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.139 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.399 --rc genhtml_branch_coverage=1 00:10:26.399 --rc genhtml_function_coverage=1 00:10:26.399 --rc genhtml_legend=1 00:10:26.399 --rc geninfo_all_blocks=1 00:10:26.399 --rc geninfo_unexecuted_blocks=1 00:10:26.399 00:10:26.399 ' 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.399 --rc genhtml_branch_coverage=1 00:10:26.399 --rc genhtml_function_coverage=1 00:10:26.399 --rc genhtml_legend=1 00:10:26.399 --rc geninfo_all_blocks=1 00:10:26.399 --rc geninfo_unexecuted_blocks=1 00:10:26.399 00:10:26.399 ' 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.399 --rc genhtml_branch_coverage=1 00:10:26.399 --rc genhtml_function_coverage=1 00:10:26.399 --rc genhtml_legend=1 00:10:26.399 --rc geninfo_all_blocks=1 00:10:26.399 --rc geninfo_unexecuted_blocks=1 00:10:26.399 00:10:26.399 ' 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.399 --rc genhtml_branch_coverage=1 00:10:26.399 --rc genhtml_function_coverage=1 00:10:26.399 --rc genhtml_legend=1 00:10:26.399 --rc geninfo_all_blocks=1 00:10:26.399 --rc geninfo_unexecuted_blocks=1 00:10:26.399 00:10:26.399 ' 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
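The lt 1.15 2 call above is the scripts/common.sh dotted-version comparison: both operands are split on '.', '-' and ':' and compared field by field, which is how the harness decides whether this lcov is new enough for the branch-coverage flags. A self-contained sketch of the same idea, assuming purely numeric version fields:

    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal, so not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc flags"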
00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.399 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.400 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:26.400 08:46:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:32.974 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.975 08:46:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:32.975 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:32.975 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:32.975 08:46:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:32.975 Found net devices under 0000:da:00.0: mlx_0_0 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:32.975 Found net devices under 0000:da:00.1: mlx_0_1 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # rdma_device_init 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:32.975 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:32.975 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:32.975 altname enp218s0f0np0 00:10:32.975 altname ens818f0np0 00:10:32.975 inet 192.168.100.8/24 scope global mlx_0_0 00:10:32.975 valid_lft forever preferred_lft forever 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:32.975 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:32.976 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:32.976 08:46:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:32.976 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:32.976 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:32.976 altname enp218s0f1np1 00:10:32.976 altname ens818f1np1 00:10:32.976 inet 192.168.100.9/24 scope global mlx_0_1 00:10:32.976 valid_lft forever preferred_lft forever 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:32.976 08:46:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:32.976 192.168.100.9' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:32.976 192.168.100.9' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # head -n 1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:32.976 192.168.100.9' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # tail -n +2 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # head -n 1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=327018 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 327018 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 327018 ']' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.976 [2024-11-06 08:46:55.159393] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:32.976 [2024-11-06 08:46:55.159444] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.976 [2024-11-06 08:46:55.236730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.976 [2024-11-06 08:46:55.279071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.976 [2024-11-06 08:46:55.279102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.976 [2024-11-06 08:46:55.279109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.976 [2024-11-06 08:46:55.279115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.976 [2024-11-06 08:46:55.279120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
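
For reference, the get_ip_address helper traced above (nvmf/common.sh@116-117) is just a three-stage pipeline over "ip -o -4 addr show". A minimal stand-alone sketch, assuming the mlx_0_* interface aliases this rig assigns to its RoCE ports (substitute your own NIC names):

    # Sketch of the address-discovery logic traced above.
    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "ADDR/PREFIX", cut drops the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    for nic in mlx_0_0 mlx_0_1; do
        ip=$(get_ip_address "$nic")
        [[ -z "$ip" ]] && { echo "no IPv4 address on $nic" >&2; continue; }
        echo "$nic $ip"
    done

This matches the values the trace resolves (192.168.100.8 for mlx_0_0, 192.168.100.9 for mlx_0_1), which then populate NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP for the listener setup below.
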
00:10:32.976 [2024-11-06 08:46:55.279640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:32.976 [2024-11-06 08:46:55.642271] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10dfb40/0x10e4030) succeed. 00:10:32.976 [2024-11-06 08:46:55.653002] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10e0ff0/0x11256d0) succeed. 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.976 ************************************ 00:10:32.976 START TEST lvs_grow_clean 00:10:32.976 ************************************ 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:32.976 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:32.977 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:32.977 08:46:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:33.236 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:33.236 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:33.236 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:33.495 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:33.495 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:33.495 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 lvol 150 00:10:33.754 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3 00:10:33.754 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:33.754 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:33.754 [2024-11-06 08:46:56.716226] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:33.754 [2024-11-06 08:46:56.716279] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:33.754 true 00:10:33.754 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:33.754 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:34.013 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:34.013 08:46:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:34.272 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3 00:10:34.272 08:46:57 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:34.531 [2024-11-06 08:46:57.438614] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:34.531 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=327517 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 327517 /var/tmp/bdevperf.sock 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 327517 ']' 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:34.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.791 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:34.791 [2024-11-06 08:46:57.683485] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
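
The lvs_grow_clean setup traced above reduces to a short RPC sequence: carve an AIO bdev out of a 200M file, build an lvstore and a 150M lvol on it, then export the lvol over NVMe/RDMA. A condensed sketch, with rpc.py standing for scripts/rpc.py and /tmp/aio_bdev standing in for the workspace-local backing file:

    # Condensed sketch of the setup phase; UUIDs come back from the RPCs at runtime.
    truncate -s 200M /tmp/aio_bdev
    rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096        # 4 KiB block size
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 data clusters at 200M
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB volume

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma \
          -a 192.168.100.8 -s 4420

The oversized --md-pages-per-cluster-ratio is what leaves headroom in the lvstore metadata so the later bdev_lvol_grow_lvstore call (after the truncate -s 400M plus bdev_aio_rescan above) can take the store from 49 to 99 data clusters without recreating it.
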
00:10:34.791 [2024-11-06 08:46:57.683533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327517 ] 00:10:34.791 [2024-11-06 08:46:57.757376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.791 [2024-11-06 08:46:57.798995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.051 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.051 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:35.051 08:46:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:35.310 Nvme0n1 00:10:35.310 08:46:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:35.568 [ 00:10:35.568 { 00:10:35.568 "name": "Nvme0n1", 00:10:35.569 "aliases": [ 00:10:35.569 "0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3" 00:10:35.569 ], 00:10:35.569 "product_name": "NVMe disk", 00:10:35.569 "block_size": 4096, 00:10:35.569 "num_blocks": 38912, 00:10:35.569 "uuid": "0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3", 00:10:35.569 "numa_id": 1, 00:10:35.569 "assigned_rate_limits": { 00:10:35.569 "rw_ios_per_sec": 0, 00:10:35.569 "rw_mbytes_per_sec": 0, 00:10:35.569 "r_mbytes_per_sec": 0, 00:10:35.569 "w_mbytes_per_sec": 0 00:10:35.569 }, 00:10:35.569 "claimed": false, 00:10:35.569 "zoned": false, 00:10:35.569 "supported_io_types": { 00:10:35.569 "read": true, 00:10:35.569 "write": true, 00:10:35.569 "unmap": true, 00:10:35.569 "flush": true, 00:10:35.569 "reset": true, 00:10:35.569 "nvme_admin": true, 00:10:35.569 "nvme_io": true, 00:10:35.569 "nvme_io_md": false, 00:10:35.569 "write_zeroes": true, 00:10:35.569 "zcopy": false, 00:10:35.569 "get_zone_info": false, 00:10:35.569 "zone_management": false, 00:10:35.569 "zone_append": false, 00:10:35.569 "compare": true, 00:10:35.569 "compare_and_write": true, 00:10:35.569 "abort": true, 00:10:35.569 "seek_hole": false, 00:10:35.569 "seek_data": false, 00:10:35.569 "copy": true, 00:10:35.569 "nvme_iov_md": false 00:10:35.569 }, 00:10:35.569 "memory_domains": [ 00:10:35.569 { 00:10:35.569 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:35.569 "dma_device_type": 0 00:10:35.569 } 00:10:35.569 ], 00:10:35.569 "driver_specific": { 00:10:35.569 "nvme": [ 00:10:35.569 { 00:10:35.569 "trid": { 00:10:35.569 "trtype": "RDMA", 00:10:35.569 "adrfam": "IPv4", 00:10:35.569 "traddr": "192.168.100.8", 00:10:35.569 "trsvcid": "4420", 00:10:35.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:35.569 }, 00:10:35.569 "ctrlr_data": { 00:10:35.569 "cntlid": 1, 00:10:35.569 "vendor_id": "0x8086", 00:10:35.569 "model_number": "SPDK bdev Controller", 00:10:35.569 "serial_number": "SPDK0", 00:10:35.569 "firmware_revision": "25.01", 00:10:35.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:35.569 "oacs": { 00:10:35.569 "security": 0, 00:10:35.569 "format": 0, 00:10:35.569 "firmware": 0, 00:10:35.569 "ns_manage": 0 00:10:35.569 }, 00:10:35.569 "multi_ctrlr": true, 
00:10:35.569 "ana_reporting": false 00:10:35.569 }, 00:10:35.569 "vs": { 00:10:35.569 "nvme_version": "1.3" 00:10:35.569 }, 00:10:35.569 "ns_data": { 00:10:35.569 "id": 1, 00:10:35.569 "can_share": true 00:10:35.569 } 00:10:35.569 } 00:10:35.569 ], 00:10:35.569 "mp_policy": "active_passive" 00:10:35.569 } 00:10:35.569 } 00:10:35.569 ] 00:10:35.569 08:46:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=327721 00:10:35.569 08:46:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:35.569 08:46:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:35.569 Running I/O for 10 seconds... 00:10:36.506 Latency(us) 00:10:36.506 [2024-11-06T07:46:59.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.506 Nvme0n1 : 1.00 34178.00 133.51 0.00 0.00 0.00 0.00 0.00 00:10:36.506 [2024-11-06T07:46:59.520Z] =================================================================================================================== 00:10:36.506 [2024-11-06T07:46:59.520Z] Total : 34178.00 133.51 0.00 0.00 0.00 0.00 0.00 00:10:36.506 00:10:37.445 08:47:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:37.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.446 Nvme0n1 : 2.00 34288.50 133.94 0.00 0.00 0.00 0.00 0.00 00:10:37.446 [2024-11-06T07:47:00.460Z] =================================================================================================================== 00:10:37.446 [2024-11-06T07:47:00.460Z] Total : 34288.50 133.94 0.00 0.00 0.00 0.00 0.00 00:10:37.446 00:10:37.704 true 00:10:37.704 08:47:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:37.704 08:47:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:37.964 08:47:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:37.964 08:47:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:37.964 08:47:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 327721 00:10:38.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.532 Nvme0n1 : 3.00 34443.00 134.54 0.00 0.00 0.00 0.00 0.00 00:10:38.532 [2024-11-06T07:47:01.546Z] =================================================================================================================== 00:10:38.532 [2024-11-06T07:47:01.546Z] Total : 34443.00 134.54 0.00 0.00 0.00 0.00 0.00 00:10:38.532 00:10:39.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.468 Nvme0n1 : 4.00 34615.75 135.22 0.00 0.00 0.00 0.00 0.00 00:10:39.468 [2024-11-06T07:47:02.482Z] 
=================================================================================================================== 00:10:39.468 [2024-11-06T07:47:02.482Z] Total : 34615.75 135.22 0.00 0.00 0.00 0.00 0.00 00:10:39.468 00:10:40.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.847 Nvme0n1 : 5.00 34731.40 135.67 0.00 0.00 0.00 0.00 0.00 00:10:40.847 [2024-11-06T07:47:03.861Z] =================================================================================================================== 00:10:40.847 [2024-11-06T07:47:03.861Z] Total : 34731.40 135.67 0.00 0.00 0.00 0.00 0.00 00:10:40.847 00:10:41.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.784 Nvme0n1 : 6.00 34804.17 135.95 0.00 0.00 0.00 0.00 0.00 00:10:41.784 [2024-11-06T07:47:04.798Z] =================================================================================================================== 00:10:41.784 [2024-11-06T07:47:04.798Z] Total : 34804.17 135.95 0.00 0.00 0.00 0.00 0.00 00:10:41.784 00:10:42.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.722 Nvme0n1 : 7.00 34865.00 136.19 0.00 0.00 0.00 0.00 0.00 00:10:42.722 [2024-11-06T07:47:05.736Z] =================================================================================================================== 00:10:42.722 [2024-11-06T07:47:05.736Z] Total : 34865.00 136.19 0.00 0.00 0.00 0.00 0.00 00:10:42.722 00:10:43.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.660 Nvme0n1 : 8.00 34913.00 136.38 0.00 0.00 0.00 0.00 0.00 00:10:43.660 [2024-11-06T07:47:06.674Z] =================================================================================================================== 00:10:43.660 [2024-11-06T07:47:06.674Z] Total : 34913.00 136.38 0.00 0.00 0.00 0.00 0.00 00:10:43.660 00:10:44.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.597 Nvme0n1 : 9.00 34951.89 136.53 0.00 0.00 0.00 0.00 0.00 00:10:44.597 [2024-11-06T07:47:07.611Z] =================================================================================================================== 00:10:44.597 [2024-11-06T07:47:07.611Z] Total : 34951.89 136.53 0.00 0.00 0.00 0.00 0.00 00:10:44.597 00:10:45.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.534 Nvme0n1 : 10.00 34976.60 136.63 0.00 0.00 0.00 0.00 0.00 00:10:45.534 [2024-11-06T07:47:08.548Z] =================================================================================================================== 00:10:45.534 [2024-11-06T07:47:08.548Z] Total : 34976.60 136.63 0.00 0.00 0.00 0.00 0.00 00:10:45.534 00:10:45.534 00:10:45.534 Latency(us) 00:10:45.534 [2024-11-06T07:47:08.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.534 Nvme0n1 : 10.00 34977.32 136.63 0.00 0.00 3656.47 2730.67 17601.10 00:10:45.534 [2024-11-06T07:47:08.548Z] =================================================================================================================== 00:10:45.534 [2024-11-06T07:47:08.548Z] Total : 34977.32 136.63 0.00 0.00 3656.47 2730.67 17601.10 00:10:45.534 { 00:10:45.534 "results": [ 00:10:45.534 { 00:10:45.534 "job": "Nvme0n1", 00:10:45.534 "core_mask": "0x2", 00:10:45.534 "workload": "randwrite", 00:10:45.534 "status": "finished", 00:10:45.534 "queue_depth": 128, 00:10:45.534 "io_size": 4096, 
00:10:45.534 "runtime": 10.003025, 00:10:45.534 "iops": 34977.31936089333, 00:10:45.534 "mibps": 136.63015375348957, 00:10:45.534 "io_failed": 0, 00:10:45.534 "io_timeout": 0, 00:10:45.534 "avg_latency_us": 3656.46730743785, 00:10:45.534 "min_latency_us": 2730.6666666666665, 00:10:45.534 "max_latency_us": 17601.097142857143 00:10:45.534 } 00:10:45.534 ], 00:10:45.534 "core_count": 1 00:10:45.534 } 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 327517 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 327517 ']' 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 327517 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 327517 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 327517' 00:10:45.534 killing process with pid 327517 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 327517 00:10:45.534 Received shutdown signal, test time was about 10.000000 seconds 00:10:45.534 00:10:45.534 Latency(us) 00:10:45.534 [2024-11-06T07:47:08.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.534 [2024-11-06T07:47:08.548Z] =================================================================================================================== 00:10:45.534 [2024-11-06T07:47:08.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:45.534 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 327517 00:10:45.793 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:46.053 08:47:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:46.312 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:46.312 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:46.312 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:46.312 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:46.312 08:47:09 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:46.571 [2024-11-06 08:47:09.459140] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:46.571 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:46.836 request: 00:10:46.836 { 00:10:46.836 "uuid": "86d17b1c-3d5d-44de-9cf5-f1be46c593c3", 00:10:46.836 "method": "bdev_lvol_get_lvstores", 00:10:46.836 "req_id": 1 00:10:46.836 } 00:10:46.836 Got JSON-RPC error response 00:10:46.836 response: 00:10:46.836 { 00:10:46.836 "code": -19, 00:10:46.836 "message": "No such device" 00:10:46.836 } 00:10:46.836 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:46.836 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:46.836 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:46.836 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:46.836 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:47.095 aio_bdev 00:10:47.095 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3 00:10:47.095 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3 00:10:47.095 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.095 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:47.095 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.095 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.095 08:47:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:47.095 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3 -t 2000 00:10:47.354 [ 00:10:47.354 { 00:10:47.354 "name": "0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3", 00:10:47.354 "aliases": [ 00:10:47.354 "lvs/lvol" 00:10:47.354 ], 00:10:47.354 "product_name": "Logical Volume", 00:10:47.354 "block_size": 4096, 00:10:47.354 "num_blocks": 38912, 00:10:47.354 "uuid": "0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3", 00:10:47.354 "assigned_rate_limits": { 00:10:47.354 "rw_ios_per_sec": 0, 00:10:47.354 "rw_mbytes_per_sec": 0, 00:10:47.354 "r_mbytes_per_sec": 0, 00:10:47.354 "w_mbytes_per_sec": 0 00:10:47.354 }, 00:10:47.354 "claimed": false, 00:10:47.354 "zoned": false, 00:10:47.354 "supported_io_types": { 00:10:47.354 "read": true, 00:10:47.354 "write": true, 00:10:47.354 "unmap": true, 00:10:47.354 "flush": false, 00:10:47.354 "reset": true, 00:10:47.354 "nvme_admin": false, 00:10:47.354 "nvme_io": false, 00:10:47.354 "nvme_io_md": false, 00:10:47.354 "write_zeroes": true, 00:10:47.354 "zcopy": false, 00:10:47.354 "get_zone_info": false, 00:10:47.354 "zone_management": false, 00:10:47.354 "zone_append": false, 00:10:47.354 "compare": false, 00:10:47.354 "compare_and_write": false, 00:10:47.354 "abort": false, 00:10:47.354 "seek_hole": true, 00:10:47.354 "seek_data": true, 00:10:47.354 "copy": false, 00:10:47.354 "nvme_iov_md": false 00:10:47.354 }, 00:10:47.354 "driver_specific": { 00:10:47.354 "lvol": { 00:10:47.354 "lvol_store_uuid": "86d17b1c-3d5d-44de-9cf5-f1be46c593c3", 00:10:47.354 "base_bdev": "aio_bdev", 00:10:47.354 "thin_provision": false, 00:10:47.354 "num_allocated_clusters": 38, 00:10:47.354 "snapshot": false, 00:10:47.354 "clone": false, 00:10:47.354 "esnap_clone": false 00:10:47.354 } 00:10:47.354 } 00:10:47.354 } 00:10:47.354 ] 00:10:47.354 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:47.354 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:47.354 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:47.612 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:47.612 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:47.612 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:47.872 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:47.872 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a53f2e7-c721-4dc6-a9bc-2e128b58a2d3 00:10:47.872 08:47:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86d17b1c-3d5d-44de-9cf5-f1be46c593c3 00:10:48.130 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:48.389 00:10:48.389 real 0m15.521s 00:10:48.389 user 0m15.527s 00:10:48.389 sys 0m0.991s 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:48.389 ************************************ 00:10:48.389 END TEST lvs_grow_clean 00:10:48.389 ************************************ 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:48.389 ************************************ 00:10:48.389 START TEST lvs_grow_dirty 00:10:48.389 ************************************ 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:48.389 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.648 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:48.648 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:48.907 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=538e813d-df18-4a41-80ad-b1329d1e4834 00:10:48.907 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:10:48.907 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:49.166 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:49.166 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:49.166 08:47:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 538e813d-df18-4a41-80ad-b1329d1e4834 lvol 150 00:10:49.166 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b2314379-95b0-49a2-9980-4230f8dc3ef5 00:10:49.166 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:49.166 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:49.425 [2024-11-06 08:47:12.302214] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:49.425 [2024-11-06 08:47:12.302268] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:49.425 true 00:10:49.425 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:10:49.425 08:47:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:49.684 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:49.684 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:49.684 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b2314379-95b0-49a2-9980-4230f8dc3ef5 00:10:49.942 08:47:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:50.201 [2024-11-06 08:47:13.012477] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:50.201 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=330123 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 330123 /var/tmp/bdevperf.sock 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 330123 ']' 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:50.460 [2024-11-06 08:47:13.270603] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:10:50.460 [2024-11-06 08:47:13.270649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330123 ] 00:10:50.460 [2024-11-06 08:47:13.343589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.460 [2024-11-06 08:47:13.384679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.460 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:50.720 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:50.979 Nvme0n1 00:10:50.979 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:50.979 [ 00:10:50.979 { 00:10:50.979 "name": "Nvme0n1", 00:10:50.979 "aliases": [ 00:10:50.979 "b2314379-95b0-49a2-9980-4230f8dc3ef5" 00:10:50.979 ], 00:10:50.979 "product_name": "NVMe disk", 00:10:50.979 "block_size": 4096, 00:10:50.979 "num_blocks": 38912, 00:10:50.979 "uuid": "b2314379-95b0-49a2-9980-4230f8dc3ef5", 00:10:50.979 "numa_id": 1, 00:10:50.979 "assigned_rate_limits": { 00:10:50.979 "rw_ios_per_sec": 0, 00:10:50.979 "rw_mbytes_per_sec": 0, 00:10:50.979 "r_mbytes_per_sec": 0, 00:10:50.979 "w_mbytes_per_sec": 0 00:10:50.979 }, 00:10:50.979 "claimed": false, 00:10:50.979 "zoned": false, 00:10:50.979 "supported_io_types": { 00:10:50.979 "read": true, 00:10:50.979 "write": true, 00:10:50.979 "unmap": true, 00:10:50.979 "flush": true, 00:10:50.979 "reset": true, 00:10:50.979 "nvme_admin": true, 00:10:50.979 "nvme_io": true, 00:10:50.979 "nvme_io_md": false, 00:10:50.979 "write_zeroes": true, 00:10:50.979 "zcopy": false, 00:10:50.979 "get_zone_info": false, 00:10:50.979 "zone_management": false, 00:10:50.979 "zone_append": false, 00:10:50.979 "compare": true, 00:10:50.979 "compare_and_write": true, 00:10:50.979 "abort": true, 00:10:50.979 "seek_hole": false, 00:10:50.979 "seek_data": false, 00:10:50.979 "copy": true, 00:10:50.979 "nvme_iov_md": false 00:10:50.979 }, 00:10:50.979 "memory_domains": [ 00:10:50.979 { 00:10:50.979 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:50.979 "dma_device_type": 0 00:10:50.979 } 00:10:50.979 ], 00:10:50.979 "driver_specific": { 00:10:50.979 "nvme": [ 00:10:50.979 { 00:10:50.979 "trid": { 00:10:50.979 "trtype": "RDMA", 00:10:50.979 "adrfam": "IPv4", 00:10:50.979 "traddr": "192.168.100.8", 00:10:50.979 "trsvcid": "4420", 00:10:50.979 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:50.979 }, 00:10:50.979 "ctrlr_data": { 00:10:50.979 "cntlid": 1, 00:10:50.979 "vendor_id": "0x8086", 00:10:50.979 "model_number": "SPDK bdev Controller", 00:10:50.979 "serial_number": "SPDK0", 00:10:50.979 "firmware_revision": "25.01", 00:10:50.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:50.979 "oacs": { 00:10:50.979 "security": 0, 00:10:50.979 "format": 0, 00:10:50.979 "firmware": 0, 00:10:50.979 "ns_manage": 0 00:10:50.979 }, 00:10:50.979 "multi_ctrlr": true, 
00:10:50.979 "ana_reporting": false 00:10:50.979 }, 00:10:50.979 "vs": { 00:10:50.979 "nvme_version": "1.3" 00:10:50.979 }, 00:10:50.979 "ns_data": { 00:10:50.979 "id": 1, 00:10:50.979 "can_share": true 00:10:50.979 } 00:10:50.979 } 00:10:50.979 ], 00:10:50.979 "mp_policy": "active_passive" 00:10:50.979 } 00:10:50.979 } 00:10:50.979 ] 00:10:50.979 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=330351 00:10:50.979 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:50.979 08:47:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:51.239 Running I/O for 10 seconds... 00:10:52.176 Latency(us) 00:10:52.176 [2024-11-06T07:47:15.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.176 Nvme0n1 : 1.00 34400.00 134.38 0.00 0.00 0.00 0.00 0.00 00:10:52.176 [2024-11-06T07:47:15.190Z] =================================================================================================================== 00:10:52.176 [2024-11-06T07:47:15.190Z] Total : 34400.00 134.38 0.00 0.00 0.00 0.00 0.00 00:10:52.176 00:10:53.113 08:47:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:10:53.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.113 Nvme0n1 : 2.00 34672.50 135.44 0.00 0.00 0.00 0.00 0.00 00:10:53.113 [2024-11-06T07:47:16.127Z] =================================================================================================================== 00:10:53.113 [2024-11-06T07:47:16.127Z] Total : 34672.50 135.44 0.00 0.00 0.00 0.00 0.00 00:10:53.113 00:10:53.372 true 00:10:53.372 08:47:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:53.372 08:47:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:10:53.372 08:47:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:53.372 08:47:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:53.372 08:47:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 330351 00:10:54.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.310 Nvme0n1 : 3.00 34743.33 135.72 0.00 0.00 0.00 0.00 0.00 00:10:54.310 [2024-11-06T07:47:17.324Z] =================================================================================================================== 00:10:54.310 [2024-11-06T07:47:17.324Z] Total : 34743.33 135.72 0.00 0.00 0.00 0.00 0.00 00:10:54.310 00:10:55.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.248 Nvme0n1 : 4.00 34840.00 136.09 0.00 0.00 0.00 0.00 0.00 00:10:55.248 [2024-11-06T07:47:18.262Z] 
=================================================================================================================== 00:10:55.248 [2024-11-06T07:47:18.262Z] Total : 34840.00 136.09 0.00 0.00 0.00 0.00 0.00 00:10:55.248 00:10:56.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.262 Nvme0n1 : 5.00 34874.40 136.23 0.00 0.00 0.00 0.00 0.00 00:10:56.262 [2024-11-06T07:47:19.276Z] =================================================================================================================== 00:10:56.262 [2024-11-06T07:47:19.276Z] Total : 34874.40 136.23 0.00 0.00 0.00 0.00 0.00 00:10:56.262 00:10:57.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.264 Nvme0n1 : 6.00 34906.50 136.35 0.00 0.00 0.00 0.00 0.00 00:10:57.264 [2024-11-06T07:47:20.278Z] =================================================================================================================== 00:10:57.264 [2024-11-06T07:47:20.278Z] Total : 34906.50 136.35 0.00 0.00 0.00 0.00 0.00 00:10:57.264 00:10:58.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.303 Nvme0n1 : 7.00 34921.57 136.41 0.00 0.00 0.00 0.00 0.00 00:10:58.303 [2024-11-06T07:47:21.317Z] =================================================================================================================== 00:10:58.303 [2024-11-06T07:47:21.317Z] Total : 34921.57 136.41 0.00 0.00 0.00 0.00 0.00 00:10:58.303 00:10:59.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.241 Nvme0n1 : 8.00 34956.00 136.55 0.00 0.00 0.00 0.00 0.00 00:10:59.241 [2024-11-06T07:47:22.255Z] =================================================================================================================== 00:10:59.241 [2024-11-06T07:47:22.255Z] Total : 34956.00 136.55 0.00 0.00 0.00 0.00 0.00 00:10:59.241 00:11:00.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.178 Nvme0n1 : 9.00 34990.11 136.68 0.00 0.00 0.00 0.00 0.00 00:11:00.178 [2024-11-06T07:47:23.192Z] =================================================================================================================== 00:11:00.178 [2024-11-06T07:47:23.192Z] Total : 34990.11 136.68 0.00 0.00 0.00 0.00 0.00 00:11:00.178 00:11:01.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.115 Nvme0n1 : 10.00 35017.40 136.79 0.00 0.00 0.00 0.00 0.00 00:11:01.115 [2024-11-06T07:47:24.129Z] =================================================================================================================== 00:11:01.115 [2024-11-06T07:47:24.129Z] Total : 35017.40 136.79 0.00 0.00 0.00 0.00 0.00 00:11:01.115 00:11:01.115 00:11:01.115 Latency(us) 00:11:01.115 [2024-11-06T07:47:24.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.115 Nvme0n1 : 10.00 35016.37 136.78 0.00 0.00 3652.36 2543.42 9549.53 00:11:01.115 [2024-11-06T07:47:24.129Z] =================================================================================================================== 00:11:01.115 [2024-11-06T07:47:24.129Z] Total : 35016.37 136.78 0.00 0.00 3652.36 2543.42 9549.53 00:11:01.115 { 00:11:01.115 "results": [ 00:11:01.115 { 00:11:01.115 "job": "Nvme0n1", 00:11:01.115 "core_mask": "0x2", 00:11:01.115 "workload": "randwrite", 00:11:01.115 "status": "finished", 00:11:01.115 "queue_depth": 128, 00:11:01.115 "io_size": 4096, 
00:11:01.115 "runtime": 10.003122, 00:11:01.115 "iops": 35016.36788994476, 00:11:01.115 "mibps": 136.78268707009673, 00:11:01.115 "io_failed": 0, 00:11:01.115 "io_timeout": 0, 00:11:01.115 "avg_latency_us": 3652.357652285639, 00:11:01.115 "min_latency_us": 2543.4209523809523, 00:11:01.115 "max_latency_us": 9549.531428571428 00:11:01.115 } 00:11:01.115 ], 00:11:01.115 "core_count": 1 00:11:01.115 } 00:11:01.115 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 330123 00:11:01.115 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 330123 ']' 00:11:01.115 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 330123 00:11:01.115 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:01.115 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.115 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 330123 00:11:01.374 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:01.374 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:01.374 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 330123' 00:11:01.374 killing process with pid 330123 00:11:01.374 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 330123 00:11:01.374 Received shutdown signal, test time was about 10.000000 seconds 00:11:01.374 00:11:01.374 Latency(us) 00:11:01.374 [2024-11-06T07:47:24.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.374 [2024-11-06T07:47:24.388Z] =================================================================================================================== 00:11:01.374 [2024-11-06T07:47:24.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:01.374 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 330123 00:11:01.374 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:01.633 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:01.892 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:01.892 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:02.152 08:47:24 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 327018 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 327018 00:11:02.152 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 327018 Killed "${NVMF_APP[@]}" "$@" 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:02.152 08:47:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=332216 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 332216 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 332216 ']' 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.152 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:02.152 [2024-11-06 08:47:25.052820] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:02.152 [2024-11-06 08:47:25.052865] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.152 [2024-11-06 08:47:25.129720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.412 [2024-11-06 08:47:25.170133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.412 [2024-11-06 08:47:25.170167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.412 [2024-11-06 08:47:25.170173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.412 [2024-11-06 08:47:25.170179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
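Here the harness restarts the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x1, pid 332216) and blocks in waitforlisten until the new process is serving RPCs. A minimal sketch of that start-and-wait idea; the rpc_get_methods probe and poll interval are illustrative assumptions, not the verbatim waitforlisten implementation:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &    # same flags as above
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                                   # poll until the RPC socket answers
    done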
00:11:02.412 [2024-11-06 08:47:25.170185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.412 [2024-11-06 08:47:25.170741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.412 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.412 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:02.412 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:02.412 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.412 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:02.412 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.412 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:02.671 [2024-11-06 08:47:25.465014] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:02.671 [2024-11-06 08:47:25.465089] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:02.671 [2024-11-06 08:47:25.465114] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b2314379-95b0-49a2-9980-4230f8dc3ef5 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b2314379-95b0-49a2-9980-4230f8dc3ef5 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.671 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:02.931 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b2314379-95b0-49a2-9980-4230f8dc3ef5 -t 2000 00:11:02.931 [ 00:11:02.931 { 00:11:02.931 "name": "b2314379-95b0-49a2-9980-4230f8dc3ef5", 00:11:02.931 "aliases": [ 00:11:02.931 "lvs/lvol" 00:11:02.931 ], 00:11:02.931 "product_name": "Logical Volume", 00:11:02.931 "block_size": 4096, 00:11:02.931 "num_blocks": 38912, 00:11:02.931 "uuid": "b2314379-95b0-49a2-9980-4230f8dc3ef5", 00:11:02.931 "assigned_rate_limits": { 00:11:02.931 "rw_ios_per_sec": 0, 00:11:02.931 "rw_mbytes_per_sec": 0, 
00:11:02.931 "r_mbytes_per_sec": 0, 00:11:02.931 "w_mbytes_per_sec": 0 00:11:02.931 }, 00:11:02.931 "claimed": false, 00:11:02.931 "zoned": false, 00:11:02.931 "supported_io_types": { 00:11:02.931 "read": true, 00:11:02.931 "write": true, 00:11:02.931 "unmap": true, 00:11:02.931 "flush": false, 00:11:02.931 "reset": true, 00:11:02.931 "nvme_admin": false, 00:11:02.931 "nvme_io": false, 00:11:02.931 "nvme_io_md": false, 00:11:02.931 "write_zeroes": true, 00:11:02.931 "zcopy": false, 00:11:02.931 "get_zone_info": false, 00:11:02.931 "zone_management": false, 00:11:02.931 "zone_append": false, 00:11:02.931 "compare": false, 00:11:02.931 "compare_and_write": false, 00:11:02.931 "abort": false, 00:11:02.931 "seek_hole": true, 00:11:02.931 "seek_data": true, 00:11:02.931 "copy": false, 00:11:02.931 "nvme_iov_md": false 00:11:02.931 }, 00:11:02.931 "driver_specific": { 00:11:02.931 "lvol": { 00:11:02.931 "lvol_store_uuid": "538e813d-df18-4a41-80ad-b1329d1e4834", 00:11:02.931 "base_bdev": "aio_bdev", 00:11:02.931 "thin_provision": false, 00:11:02.931 "num_allocated_clusters": 38, 00:11:02.931 "snapshot": false, 00:11:02.931 "clone": false, 00:11:02.931 "esnap_clone": false 00:11:02.931 } 00:11:02.931 } 00:11:02.931 } 00:11:02.931 ] 00:11:02.931 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:02.931 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:02.931 08:47:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:03.191 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:03.191 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:03.191 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:03.451 [2024-11-06 08:47:26.401944] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:03.451 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:03.711 request: 00:11:03.711 { 00:11:03.711 "uuid": "538e813d-df18-4a41-80ad-b1329d1e4834", 00:11:03.711 "method": "bdev_lvol_get_lvstores", 00:11:03.711 "req_id": 1 00:11:03.711 } 00:11:03.711 Got JSON-RPC error response 00:11:03.711 response: 00:11:03.711 { 00:11:03.711 "code": -19, 00:11:03.711 "message": "No such device" 00:11:03.711 } 00:11:03.711 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:03.711 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:03.711 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:03.711 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:03.711 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:03.970 aio_bdev 00:11:03.970 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b2314379-95b0-49a2-9980-4230f8dc3ef5 00:11:03.970 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b2314379-95b0-49a2-9980-4230f8dc3ef5 00:11:03.970 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.970 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:03.970 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.970 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.970 08:47:26 
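This is the negative half of the dirty-recovery test: with aio_bdev deleted, bdev_lvol_get_lvstores is expected to fail, and the JSON-RPC error above (code -19, "No such device") is the passing outcome. The NOT wrapper inverts the command's exit status; a sketch of the same check in plain shell, with no harness helpers assumed:

    # succeed only if the lvstore lookup fails after the base bdev is gone
    if ./scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834; then
        echo "lvstore unexpectedly still present" >&2; exit 1
    fi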
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:03.970 08:47:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b2314379-95b0-49a2-9980-4230f8dc3ef5 -t 2000 00:11:04.230 [ 00:11:04.230 { 00:11:04.230 "name": "b2314379-95b0-49a2-9980-4230f8dc3ef5", 00:11:04.230 "aliases": [ 00:11:04.230 "lvs/lvol" 00:11:04.230 ], 00:11:04.230 "product_name": "Logical Volume", 00:11:04.230 "block_size": 4096, 00:11:04.230 "num_blocks": 38912, 00:11:04.230 "uuid": "b2314379-95b0-49a2-9980-4230f8dc3ef5", 00:11:04.230 "assigned_rate_limits": { 00:11:04.230 "rw_ios_per_sec": 0, 00:11:04.230 "rw_mbytes_per_sec": 0, 00:11:04.230 "r_mbytes_per_sec": 0, 00:11:04.230 "w_mbytes_per_sec": 0 00:11:04.230 }, 00:11:04.230 "claimed": false, 00:11:04.230 "zoned": false, 00:11:04.230 "supported_io_types": { 00:11:04.230 "read": true, 00:11:04.230 "write": true, 00:11:04.230 "unmap": true, 00:11:04.230 "flush": false, 00:11:04.230 "reset": true, 00:11:04.230 "nvme_admin": false, 00:11:04.230 "nvme_io": false, 00:11:04.230 "nvme_io_md": false, 00:11:04.230 "write_zeroes": true, 00:11:04.230 "zcopy": false, 00:11:04.230 "get_zone_info": false, 00:11:04.230 "zone_management": false, 00:11:04.230 "zone_append": false, 00:11:04.230 "compare": false, 00:11:04.230 "compare_and_write": false, 00:11:04.230 "abort": false, 00:11:04.230 "seek_hole": true, 00:11:04.230 "seek_data": true, 00:11:04.230 "copy": false, 00:11:04.230 "nvme_iov_md": false 00:11:04.230 }, 00:11:04.230 "driver_specific": { 00:11:04.230 "lvol": { 00:11:04.230 "lvol_store_uuid": "538e813d-df18-4a41-80ad-b1329d1e4834", 00:11:04.230 "base_bdev": "aio_bdev", 00:11:04.230 "thin_provision": false, 00:11:04.230 "num_allocated_clusters": 38, 00:11:04.230 "snapshot": false, 00:11:04.230 "clone": false, 00:11:04.230 "esnap_clone": false 00:11:04.230 } 00:11:04.230 } 00:11:04.230 } 00:11:04.230 ] 00:11:04.230 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:04.230 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:04.230 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:04.489 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:04.489 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:04.489 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:04.748 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:04.748 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b2314379-95b0-49a2-9980-4230f8dc3ef5 00:11:04.748 08:47:27 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 538e813d-df18-4a41-80ad-b1329d1e4834 00:11:05.007 08:47:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:05.266 00:11:05.266 real 0m16.774s 00:11:05.266 user 0m44.618s 00:11:05.266 sys 0m2.677s 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:05.266 ************************************ 00:11:05.266 END TEST lvs_grow_dirty 00:11:05.266 ************************************ 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:05.266 nvmf_trace.0 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:05.266 rmmod nvme_rdma 00:11:05.266 rmmod nvme_fabrics 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:05.266 
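On teardown, process_shm archives every trace file the target left in shared memory; the tar call above packs /dev/shm/nvmf_trace.0 into the job's output directory so it can be replayed offline, as the earlier app_setup_trace notice suggested. A sketch of the same collection step, with the destination directory as an illustrative assumption:

    # archive each per-app trace file found in shared memory
    for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -czf "./${f}_shm.tar.gz" "$f"
    done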
08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 332216 ']' 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 332216 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 332216 ']' 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 332216 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:05.266 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332216 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332216' 00:11:05.526 killing process with pid 332216 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 332216 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 332216 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:05.526 00:11:05.526 real 0m39.441s 00:11:05.526 user 1m5.699s 00:11:05.526 sys 0m8.501s 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:05.526 ************************************ 00:11:05.526 END TEST nvmf_lvs_grow 00:11:05.526 ************************************ 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.526 ************************************ 00:11:05.526 START TEST nvmf_bdev_io_wait 00:11:05.526 ************************************ 00:11:05.526 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:05.786 * Looking for test storage... 
00:11:05.786 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lcov --version 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:05.786 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.787 --rc genhtml_branch_coverage=1 00:11:05.787 --rc genhtml_function_coverage=1 00:11:05.787 --rc genhtml_legend=1 00:11:05.787 --rc geninfo_all_blocks=1 00:11:05.787 --rc geninfo_unexecuted_blocks=1 00:11:05.787 00:11:05.787 ' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.787 --rc genhtml_branch_coverage=1 00:11:05.787 --rc genhtml_function_coverage=1 00:11:05.787 --rc genhtml_legend=1 00:11:05.787 --rc geninfo_all_blocks=1 00:11:05.787 --rc geninfo_unexecuted_blocks=1 00:11:05.787 00:11:05.787 ' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.787 --rc genhtml_branch_coverage=1 00:11:05.787 --rc genhtml_function_coverage=1 00:11:05.787 --rc genhtml_legend=1 00:11:05.787 --rc geninfo_all_blocks=1 00:11:05.787 --rc geninfo_unexecuted_blocks=1 00:11:05.787 00:11:05.787 ' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.787 --rc genhtml_branch_coverage=1 00:11:05.787 --rc genhtml_function_coverage=1 00:11:05.787 --rc genhtml_legend=1 00:11:05.787 --rc geninfo_all_blocks=1 00:11:05.787 --rc geninfo_unexecuted_blocks=1 00:11:05.787 00:11:05.787 ' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.787 08:47:28 
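The lt/cmp_versions trace above is the harness checking that the installed lcov (1.15 here) predates 2.x before choosing coverage flags: both version strings are split on dots and dashes and compared field by field. A compact sketch of the same predicate; sort -V is an illustrative stand-in, not the verbatim scripts/common.sh implementation:

    lt() {   # true if $1 is a strictly lower version than $2
        [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "1.15 < 2"   # matches the traced result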
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.787 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.787 08:47:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.364 08:47:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:12.364 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:12.364 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:12.364 08:47:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:12.364 Found net devices under 0000:da:00.0: mlx_0_0 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:12.364 Found net devices under 0000:da:00.1: mlx_0_1 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # rdma_device_init 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
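Device discovery above reduces to a /sys lookup: for each Mellanox PCI function the harness globs /sys/bus/pci/devices/$pci/net/* and keeps the interface names it finds (mlx_0_0 and mlx_0_1 here). The same lookup by hand, using a PCI address from this log:

    pci=0000:da:00.0
    ls "/sys/bus/pci/devices/$pci/net/"   # -> mlx_0_0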
-- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.364 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:12.365 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.365 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:12.365 altname enp218s0f0np0 00:11:12.365 altname ens818f0np0 00:11:12.365 inet 192.168.100.8/24 scope global mlx_0_0 00:11:12.365 valid_lft forever preferred_lft forever 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:12.365 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.365 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:12.365 altname enp218s0f1np1 00:11:12.365 altname ens818f1np1 00:11:12.365 inet 192.168.100.9/24 scope global mlx_0_1 00:11:12.365 valid_lft forever preferred_lft forever 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
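get_ip_address is the three-stage pipeline traced above: ip -o -4 addr show prints one line per address, awk takes the CIDR field, and cut drops the prefix length. Run against the first port it yields the address shown in the interface dump:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8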
-t rxe_net_devs 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:12.365 192.168.100.9' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:12.365 192.168.100.9' 00:11:12.365 
08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # head -n 1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:12.365 192.168.100.9' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # tail -n +2 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # head -n 1 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=336032 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 336032 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 336032 ']' 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.365 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 [2024-11-06 08:47:34.620870] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
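Aside: the head/tail pipeline traced above is how nvmf/common.sh splits the newline-separated RDMA_IP_LIST into the two target addresses. A minimal standalone sketch of that split, with the addresses hard-coded as in this run:

# Two RDMA-capable interfaces were found; their IPs arrive one per line.
RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
# First target IP: the first line of the list.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
# Second target IP: drop the first line, then take the next one.
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9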
00:11:12.366 [2024-11-06 08:47:34.620920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.366 [2024-11-06 08:47:34.681472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.366 [2024-11-06 08:47:34.727561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.366 [2024-11-06 08:47:34.727597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.366 [2024-11-06 08:47:34.727605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.366 [2024-11-06 08:47:34.727611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.366 [2024-11-06 08:47:34.727616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.366 [2024-11-06 08:47:34.729165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.366 [2024-11-06 08:47:34.729212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.366 [2024-11-06 08:47:34.729318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.366 [2024-11-06 08:47:34.729319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:12.366 08:47:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.366 08:47:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 [2024-11-06 08:47:34.928557] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe14e30/0xe19320) succeed. 00:11:12.366 [2024-11-06 08:47:34.937219] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe164c0/0xe5a9c0) succeed. 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 Malloc0 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.366 [2024-11-06 08:47:35.121806] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=336067 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=336069 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 
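For orientation, the rpc_cmd sequence traced above (bdev_io_wait.sh steps 18 through 25) maps onto the following standalone rpc.py calls against the running nvmf_tgt; the arguments are taken verbatim from the log, only the wrapper differs:

# Configure the bdev layer, finish framework init, create the RDMA transport.
scripts/rpc.py bdev_set_options -p 5 -c 1
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Back the subsystem with a 64 MB, 512 B-block malloc bdev and expose it on RDMA port 4420.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420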
00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:12.366 { 00:11:12.366 "params": { 00:11:12.366 "name": "Nvme$subsystem", 00:11:12.366 "trtype": "$TEST_TRANSPORT", 00:11:12.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:12.366 "adrfam": "ipv4", 00:11:12.366 "trsvcid": "$NVMF_PORT", 00:11:12.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:12.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:12.366 "hdgst": ${hdgst:-false}, 00:11:12.366 "ddgst": ${ddgst:-false} 00:11:12.366 }, 00:11:12.366 "method": "bdev_nvme_attach_controller" 00:11:12.366 } 00:11:12.366 EOF 00:11:12.366 )") 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=336071 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:12.366 { 00:11:12.366 "params": { 00:11:12.366 "name": "Nvme$subsystem", 00:11:12.366 "trtype": "$TEST_TRANSPORT", 00:11:12.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:12.366 "adrfam": "ipv4", 00:11:12.366 "trsvcid": "$NVMF_PORT", 00:11:12.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:12.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:12.366 "hdgst": ${hdgst:-false}, 00:11:12.366 "ddgst": ${ddgst:-false} 00:11:12.366 }, 00:11:12.366 "method": "bdev_nvme_attach_controller" 00:11:12.366 } 00:11:12.366 EOF 00:11:12.366 )") 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=336074 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:12.366 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:12.367 { 00:11:12.367 "params": { 00:11:12.367 "name": "Nvme$subsystem", 00:11:12.367 "trtype": "$TEST_TRANSPORT", 
00:11:12.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:12.367 "adrfam": "ipv4", 00:11:12.367 "trsvcid": "$NVMF_PORT", 00:11:12.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:12.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:12.367 "hdgst": ${hdgst:-false}, 00:11:12.367 "ddgst": ${ddgst:-false} 00:11:12.367 }, 00:11:12.367 "method": "bdev_nvme_attach_controller" 00:11:12.367 } 00:11:12.367 EOF 00:11:12.367 )") 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:12.367 { 00:11:12.367 "params": { 00:11:12.367 "name": "Nvme$subsystem", 00:11:12.367 "trtype": "$TEST_TRANSPORT", 00:11:12.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:12.367 "adrfam": "ipv4", 00:11:12.367 "trsvcid": "$NVMF_PORT", 00:11:12.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:12.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:12.367 "hdgst": ${hdgst:-false}, 00:11:12.367 "ddgst": ${ddgst:-false} 00:11:12.367 }, 00:11:12.367 "method": "bdev_nvme_attach_controller" 00:11:12.367 } 00:11:12.367 EOF 00:11:12.367 )") 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 336067 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:12.367 "params": { 00:11:12.367 "name": "Nvme1", 00:11:12.367 "trtype": "rdma", 00:11:12.367 "traddr": "192.168.100.8", 00:11:12.367 "adrfam": "ipv4", 00:11:12.367 "trsvcid": "4420", 00:11:12.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:12.367 "hdgst": false, 00:11:12.367 "ddgst": false 00:11:12.367 }, 00:11:12.367 "method": "bdev_nvme_attach_controller" 00:11:12.367 }' 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
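The config+=()/IFS=,/jq chatter above is gen_nvmf_target_json at work: it emits one bdev_nvme_attach_controller entry per requested subsystem from a here-doc template, joins the entries with commas, and pretty-prints the result for bdevperf. A condensed sketch of the helper, simplified from nvmf/common.sh (the real template substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and $NVMF_PORT rather than hard-coding rdma/4420):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem number (1 -> cnode1/host1).
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the entries with commas and wrap them in the bdev-subsystem
    # config schema that bdevperf expects on its --json input.
    local IFS=,
    jq . <<<"{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}

bdevperf consumes the output via process substitution, e.g. bdevperf --json <(gen_nvmf_target_json), which is the /dev/fd/63 visible in the command lines above.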
00:11:12.367 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:12.367 "params": { 00:11:12.367 "name": "Nvme1", 00:11:12.367 "trtype": "rdma", 00:11:12.367 "traddr": "192.168.100.8", 00:11:12.367 "adrfam": "ipv4", 00:11:12.367 "trsvcid": "4420", 00:11:12.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:12.367 "hdgst": false, 00:11:12.367 "ddgst": false 00:11:12.367 }, 00:11:12.367 "method": "bdev_nvme_attach_controller" 00:11:12.367 }' 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:12.367 "params": { 00:11:12.367 "name": "Nvme1", 00:11:12.367 "trtype": "rdma", 00:11:12.367 "traddr": "192.168.100.8", 00:11:12.367 "adrfam": "ipv4", 00:11:12.367 "trsvcid": "4420", 00:11:12.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:12.367 "hdgst": false, 00:11:12.367 "ddgst": false 00:11:12.367 }, 00:11:12.367 "method": "bdev_nvme_attach_controller" 00:11:12.367 }' 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 08:47:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:12.367 "params": { 00:11:12.367 "name": "Nvme1", 00:11:12.367 "trtype": "rdma", 00:11:12.367 "traddr": "192.168.100.8", 00:11:12.367 "adrfam": "ipv4", 00:11:12.367 "trsvcid": "4420", 00:11:12.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:12.367 "hdgst": false, 00:11:12.367 "ddgst": false 00:11:12.367 }, 00:11:12.367 "method": "bdev_nvme_attach_controller" 00:11:12.367 }' [2024-11-06 08:47:35.172569] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:12.367 [2024-11-06 08:47:35.172618] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:12.367 [2024-11-06 08:47:35.172943] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... [2024-11-06 08:47:35.172943] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:12.367 [2024-11-06 08:47:35.172988] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:12.367 [2024-11-06 08:47:35.172988] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:12.367 [2024-11-06 08:47:35.173279] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
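The four "Starting SPDK" banners above are four bdevperf instances launched concurrently, one per workload. Each needs its own shared-memory instance id (-i N, which is what produces the distinct --file-prefix=spdkN EAL arguments) and a disjoint core mask so its reactor does not collide with the others or with the target's 0xF mask. Condensed from bdev_io_wait.sh, roughly:

BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
# -m core mask, -i shm id, -q queue depth, -o IO size, -w workload, -t seconds, -s DPDK mem (MB)
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
# Let all four run to completion before tearing the subsystem down.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"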
00:11:12.367 [2024-11-06 08:47:35.173320] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:12.367 [2024-11-06 08:47:35.368956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.627 [2024-11-06 08:47:35.411877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:12.627 [2024-11-06 08:47:35.468058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.627 [2024-11-06 08:47:35.513712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:12.627 [2024-11-06 08:47:35.531118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.627 [2024-11-06 08:47:35.565925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:12.627 [2024-11-06 08:47:35.630120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.886 [2024-11-06 08:47:35.682432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:12.886 Running I/O for 1 seconds... 00:11:12.886 Running I/O for 1 seconds... 00:11:12.886 Running I/O for 1 seconds... 00:11:12.886 Running I/O for 1 seconds... 00:11:13.823 17152.00 IOPS, 67.00 MiB/s 00:11:13.824 Latency(us) 00:11:13.824 [2024-11-06T07:47:36.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.824 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:13.824 Nvme1n1 : 1.01 17190.54 67.15 0.00 0.00 7422.59 4213.03 13856.18 00:11:13.824 [2024-11-06T07:47:36.838Z] =================================================================================================================== 00:11:13.824 [2024-11-06T07:47:36.838Z] Total : 17190.54 67.15 0.00 0.00 7422.59 4213.03 13856.18 00:11:13.824 252072.00 IOPS, 984.66 MiB/s [2024-11-06T07:47:36.838Z] 17366.00 IOPS, 67.84 MiB/s 00:11:13.824 Latency(us) 00:11:13.824 [2024-11-06T07:47:36.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.824 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:13.824 Nvme1n1 : 1.00 251692.14 983.17 0.00 0.00 505.45 231.13 2059.70 00:11:13.824 [2024-11-06T07:47:36.838Z] =================================================================================================================== 00:11:13.824 [2024-11-06T07:47:36.838Z] Total : 251692.14 983.17 0.00 0.00 505.45 231.13 2059.70 00:11:13.824 00:11:13.824 Latency(us) 00:11:13.824 [2024-11-06T07:47:36.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.824 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:13.824 Nvme1n1 : 1.01 17425.60 68.07 0.00 0.00 7326.09 3838.54 15042.07 00:11:13.824 [2024-11-06T07:47:36.838Z] =================================================================================================================== 00:11:13.824 [2024-11-06T07:47:36.838Z] Total : 17425.60 68.07 0.00 0.00 7326.09 3838.54 15042.07 00:11:13.824 14179.00 IOPS, 55.39 MiB/s 00:11:13.824 Latency(us) 00:11:13.824 [2024-11-06T07:47:36.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.824 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:13.824 Nvme1n1 : 1.01 14269.28 55.74 0.00 0.00 8947.53 3401.63 19223.89 00:11:13.824 [2024-11-06T07:47:36.838Z] 
=================================================================================================================== 00:11:13.824 [2024-11-06T07:47:36.838Z] Total : 14269.28 55.74 0.00 0.00 8947.53 3401.63 19223.89 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 336069 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 336071 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 336074 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.083 08:47:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:14.083 rmmod nvme_rdma 00:11:14.083 rmmod nvme_fabrics 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 336032 ']' 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 336032 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 336032 ']' 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 336032 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 336032 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 336032' 00:11:14.083 killing process with pid 336032 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 336032 00:11:14.083 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 336032 00:11:14.342 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:14.342 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:14.342 00:11:14.342 real 0m8.824s 00:11:14.342 user 0m17.313s 00:11:14.342 sys 0m5.630s 00:11:14.342 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.342 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.342 ************************************ 00:11:14.342 END TEST nvmf_bdev_io_wait 00:11:14.342 ************************************ 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.602 ************************************ 00:11:14.602 START TEST nvmf_queue_depth 00:11:14.602 ************************************ 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:14.602 * Looking for test storage... 
00:11:14.602 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lcov --version 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.602 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:14.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.603 --rc genhtml_branch_coverage=1 00:11:14.603 --rc genhtml_function_coverage=1 00:11:14.603 --rc genhtml_legend=1 00:11:14.603 --rc geninfo_all_blocks=1 00:11:14.603 --rc geninfo_unexecuted_blocks=1 00:11:14.603 00:11:14.603 ' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:14.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.603 --rc genhtml_branch_coverage=1 00:11:14.603 --rc genhtml_function_coverage=1 00:11:14.603 --rc genhtml_legend=1 00:11:14.603 --rc geninfo_all_blocks=1 00:11:14.603 --rc geninfo_unexecuted_blocks=1 00:11:14.603 00:11:14.603 ' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:14.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.603 --rc genhtml_branch_coverage=1 00:11:14.603 --rc genhtml_function_coverage=1 00:11:14.603 --rc genhtml_legend=1 00:11:14.603 --rc geninfo_all_blocks=1 00:11:14.603 --rc geninfo_unexecuted_blocks=1 00:11:14.603 00:11:14.603 ' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:14.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.603 --rc genhtml_branch_coverage=1 00:11:14.603 --rc genhtml_function_coverage=1 00:11:14.603 --rc genhtml_legend=1 00:11:14.603 --rc geninfo_all_blocks=1 00:11:14.603 --rc geninfo_unexecuted_blocks=1 00:11:14.603 00:11:14.603 ' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.603 08:47:37 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.603 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:14.603 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:14.604 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.604 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.604 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.604 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:14.604 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:14.604 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.604 08:47:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.180 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:21.181 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:21.181 Found 0000:da:00.1 (0x15b3 - 0x1015) 
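The "Found 0000:da:00.x" lines mark the end of the PCI scan: for each supported device id, the script resolves which kernel netdev the port is bound to via sysfs. A trimmed sketch of that lookup (device list hard-coded here; the real loop builds it from the pci_bus_cache arrays traced above):

pci_devs=(0000:da:00.0 0000:da:00.1)   # the two mlx5 (0x15b3:0x1015) ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    # The kernel exposes the bound netdev name(s) under the device's sysfs node.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done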
00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:21.181 Found net devices under 0000:da:00.0: mlx_0_0 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:21.181 Found net devices under 0000:da:00.1: mlx_0_1 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # rdma_device_init 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:21.181 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.181 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:21.181 altname enp218s0f0np0 00:11:21.181 altname ens818f0np0 00:11:21.181 inet 192.168.100.8/24 scope global mlx_0_0 00:11:21.181 valid_lft forever preferred_lft forever 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:21.181 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.181 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:21.181 altname enp218s0f1np1 00:11:21.181 altname ens818f1np1 00:11:21.181 inet 192.168.100.9/24 scope global mlx_0_1 00:11:21.181 valid_lft forever preferred_lft forever 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:21.181 08:47:43 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.181 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:21.182 192.168.100.9' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:21.182 192.168.100.9' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@483 -- # head -n 1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:21.182 192.168.100.9' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # tail -n +2 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # head -n 1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=339617 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 339617 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 339617 ']' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 [2024-11-06 08:47:43.534675] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:11:21.182 [2024-11-06 08:47:43.534729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.182 [2024-11-06 08:47:43.613834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.182 [2024-11-06 08:47:43.653072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.182 [2024-11-06 08:47:43.653104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.182 [2024-11-06 08:47:43.653111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.182 [2024-11-06 08:47:43.653117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.182 [2024-11-06 08:47:43.653122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.182 [2024-11-06 08:47:43.653688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 [2024-11-06 08:47:43.815482] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf29ea0/0xf2e390) succeed. 00:11:21.182 [2024-11-06 08:47:43.825924] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf2b350/0xf6fa30) succeed. 
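
Annotation, for readers following the trace: the common.sh prologue above (from load_ib_rdma_modules through modprobe nvme-rdma) is where the test addresses get assigned. get_rdma_if_list yields mlx_0_0 and mlx_0_1, get_ip_address strips the prefix length off `ip -o -4 addr show`, and a head/tail split turns the two-line RDMA_IP_LIST into first and second target IPs. A condensed sketch of that pattern, with the awk/cut/head/tail invocations verbatim from the trace but the helper bodies reconstructed rather than copied from SPDK's nvmf/common.sh (get_rdma_if_list assumed defined as traced):

    get_ip_address() {
        local interface=$1
        # $4 of `ip -o -4 addr show` is "ADDR/PREFIX"; keep only ADDR.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(
        for nic_name in $(get_rdma_if_list); do   # mlx_0_0, mlx_0_1 in this run
            get_ip_address "$nic_name"
        done
    )
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

The two create_ib_device notices just above are the target reacting to the transport RPC issued right before them. rpc_cmd wraps scripts/rpc.py against the app's RPC socket, so that step is roughly equivalent to this direct invocation (path as used elsewhere in this run):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

--num-shared-buffers 1024 sizes the transport's pooled data buffers, matching the NVMF_TRANSPORT_OPTS string assembled earlier, and -u sets the I/O unit size to 8192 bytes.
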
00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 Malloc0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 [2024-11-06 08:47:43.915651] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=339641 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 339641 /var/tmp/bdevperf.sock 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 339641 ']' 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:21.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.182 08:47:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.182 [2024-11-06 08:47:43.965195] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:21.182 [2024-11-06 08:47:43.965255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339641 ] 00:11:21.182 [2024-11-06 08:47:44.040827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.182 [2024-11-06 08:47:44.083813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.183 08:47:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.183 08:47:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:21.183 08:47:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:21.183 08:47:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.183 08:47:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.442 NVMe0n1 00:11:21.442 08:47:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.442 08:47:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:21.442 Running I/O for 10 seconds... 
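
Annotation: at this point the whole queue-depth rig is wired up, and the ten-second run whose samples follow is underway. Stripping the xtrace prefixes, the sequence was (rpc.py standing in for scripts/rpc.py, paths relative to the spdk checkout; all arguments verbatim from the trace):

    # Target side: 64 MiB malloc bdev with 512 B blocks, exposed over NVMe-oF/RDMA.
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Initiator side: bdevperf in wait-for-RPC mode (-z), queue depth 1024, 4 KiB verify I/O.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples, the summary table, and the JSON blob below are the output of that perform_tests call; note the reported mibps is just iops x 4096 B expressed in MiB/s.
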
00:11:23.754 16821.00 IOPS, 65.71 MiB/s [2024-11-06T07:47:47.706Z] 17246.00 IOPS, 67.37 MiB/s [2024-11-06T07:47:48.643Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-06T07:47:49.580Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-06T07:47:50.526Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-06T07:47:51.463Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-06T07:47:52.400Z] 17449.00 IOPS, 68.16 MiB/s [2024-11-06T07:47:53.780Z] 17477.75 IOPS, 68.27 MiB/s [2024-11-06T07:47:54.719Z] 17499.89 IOPS, 68.36 MiB/s [2024-11-06T07:47:54.719Z] 17510.40 IOPS, 68.40 MiB/s 00:11:31.705 Latency(us) 00:11:31.705 [2024-11-06T07:47:54.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.705 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:31.705 Verification LBA range: start 0x0 length 0x4000 00:11:31.705 NVMe0n1 : 10.04 17515.04 68.42 0.00 0.00 58297.24 18350.08 36949.82 00:11:31.705 [2024-11-06T07:47:54.719Z] =================================================================================================================== 00:11:31.705 [2024-11-06T07:47:54.719Z] Total : 17515.04 68.42 0.00 0.00 58297.24 18350.08 36949.82 00:11:31.705 { 00:11:31.705 "results": [ 00:11:31.705 { 00:11:31.705 "job": "NVMe0n1", 00:11:31.705 "core_mask": "0x1", 00:11:31.705 "workload": "verify", 00:11:31.705 "status": "finished", 00:11:31.705 "verify_range": { 00:11:31.705 "start": 0, 00:11:31.705 "length": 16384 00:11:31.705 }, 00:11:31.705 "queue_depth": 1024, 00:11:31.705 "io_size": 4096, 00:11:31.705 "runtime": 10.044912, 00:11:31.705 "iops": 17515.03646821396, 00:11:31.705 "mibps": 68.41811120396078, 00:11:31.705 "io_failed": 0, 00:11:31.705 "io_timeout": 0, 00:11:31.705 "avg_latency_us": 58297.24432186088, 00:11:31.705 "min_latency_us": 18350.08, 00:11:31.705 "max_latency_us": 36949.82095238095 00:11:31.705 } 00:11:31.705 ], 00:11:31.705 "core_count": 1 00:11:31.705 } 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 339641 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 339641 ']' 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 339641 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 339641 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 339641' 00:11:31.705 killing process with pid 339641 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 339641 00:11:31.705 Received shutdown signal, test time was about 10.000000 seconds 00:11:31.705 00:11:31.705 Latency(us) 00:11:31.705 [2024-11-06T07:47:54.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.705 [2024-11-06T07:47:54.719Z] 
=================================================================================================================== 00:11:31.705 [2024-11-06T07:47:54.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 339641 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.705 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:31.705 rmmod nvme_rdma 00:11:31.705 rmmod nvme_fabrics 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 339617 ']' 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 339617 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 339617 ']' 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 339617 00:11:31.964 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:31.965 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.965 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 339617 00:11:31.965 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:31.965 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:31.965 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 339617' 00:11:31.965 killing process with pid 339617 00:11:31.965 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 339617 00:11:31.965 08:47:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 339617 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:32.224 00:11:32.224 real 0m17.618s 00:11:32.224 user 0m24.110s 00:11:32.224 sys 0m4.971s 00:11:32.224 08:47:55 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:32.224 ************************************ 00:11:32.224 END TEST nvmf_queue_depth 00:11:32.224 ************************************ 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:32.224 ************************************ 00:11:32.224 START TEST nvmf_target_multipath 00:11:32.224 ************************************ 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:32.224 * Looking for test storage... 00:11:32.224 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lcov --version 00:11:32.224 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:32.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.484 --rc genhtml_branch_coverage=1 00:11:32.484 --rc genhtml_function_coverage=1 00:11:32.484 --rc genhtml_legend=1 00:11:32.484 --rc geninfo_all_blocks=1 00:11:32.484 --rc geninfo_unexecuted_blocks=1 00:11:32.484 00:11:32.484 ' 00:11:32.484 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:32.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.484 --rc genhtml_branch_coverage=1 00:11:32.485 --rc genhtml_function_coverage=1 00:11:32.485 --rc genhtml_legend=1 00:11:32.485 --rc geninfo_all_blocks=1 00:11:32.485 --rc geninfo_unexecuted_blocks=1 00:11:32.485 00:11:32.485 ' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:32.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.485 --rc genhtml_branch_coverage=1 00:11:32.485 --rc genhtml_function_coverage=1 00:11:32.485 --rc genhtml_legend=1 00:11:32.485 --rc geninfo_all_blocks=1 00:11:32.485 --rc geninfo_unexecuted_blocks=1 00:11:32.485 00:11:32.485 ' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:32.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.485 --rc genhtml_branch_coverage=1 00:11:32.485 --rc genhtml_function_coverage=1 00:11:32.485 --rc genhtml_legend=1 00:11:32.485 --rc geninfo_all_blocks=1 00:11:32.485 --rc geninfo_unexecuted_blocks=1 00:11:32.485 00:11:32.485 ' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.485 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.485 08:47:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:39.059 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:39.059 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:39.059 Found net devices under 0000:da:00.0: mlx_0_0 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:39.059 
08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.059 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:39.060 Found net devices under 0000:da:00.1: mlx_0_1 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # rdma_device_init 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:39.060 08:48:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
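
Annotation on two details of the common.sh setup traced through this stretch. First, the 'Found net devices under 0000:da:00.x' lines come from a sysfs glob, not from any tool invocation; the pattern, sketched from the trace rather than copied from the helper:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs under the PCI device
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep e.g. mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

Second, the earlier "[: : integer expression expected" warning from common.sh line 33 is bash refusing to compare an empty expansion numerically ('[' '' -eq 1 ']'). The usual defense is a default expansion; the flag name below is a stand-in, since the trace does not show which variable common.sh@33 expands:

    flag=""                            # mimics the empty expansion seen in the trace
    if [ "${flag:-0}" -eq 1 ]; then    # :-0 substitutes 0 for unset *or* empty values
        echo enabled
    else
        echo disabled                  # taken here, and no bash warning is printed
    fi
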
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:11:39.060 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:39.060 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff
00:11:39.060 altname enp218s0f0np0
00:11:39.060 altname ens818f0np0
00:11:39.060 inet 192.168.100.8/24 scope global mlx_0_0
00:11:39.060 valid_lft forever preferred_lft forever
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:11:39.060 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:39.060 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff
00:11:39.060 altname enp218s0f1np1
00:11:39.060 altname ens818f1np1
00:11:39.060 inet 192.168.100.9/24 scope global mlx_0_1
00:11:39.060 valid_lft forever preferred_lft forever
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # get_available_rdma_ips
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8
00:11:39.060 192.168.100.9'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # echo '192.168.100.8
00:11:39.060 192.168.100.9'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # head -n 1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # echo '192.168.100.8
00:11:39.060 192.168.100.9'
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # tail -n +2
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # head -n 1
00:11:39.060 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == tcp ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == rdma ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-rdma
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now'
00:11:39.061 run this test only with TCP transport for now
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:39.061 rmmod nvme_rdma
00:11:39.061 rmmod nvme_fabrics
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:11:39.061
00:11:39.061 real 0m6.171s
00:11:39.061 user 0m1.797s
00:11:39.061 sys 0m4.511s
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
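The repeated get_ip_address/RDMA_IP_LIST passes traced above reduce to one pipeline plus a head/tail split of the resulting list. A minimal bash sketch reconstructed from the traced commands (the function framing is inferred from the xtrace; nvmf/common.sh's actual bodies may differ):

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the interface, stripped of its /prefix suffix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # one address per RDMA netdev found in this run (mlx_0_0, mlx_0_1)
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)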
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:11:39.061 ************************************
00:11:39.061 END TEST nvmf_target_multipath
00:11:39.061 ************************************
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:39.061 ************************************
00:11:39.061 START TEST nvmf_zcopy
00:11:39.061 ************************************
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma
00:11:39.061 * Looking for test storage...
00:11:39.061 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lcov --version
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:11:39.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.061 --rc genhtml_branch_coverage=1
00:11:39.061 --rc genhtml_function_coverage=1
00:11:39.061 --rc genhtml_legend=1
00:11:39.061 --rc geninfo_all_blocks=1
00:11:39.061 --rc geninfo_unexecuted_blocks=1
00:11:39.061
00:11:39.061 '
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:11:39.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.061 --rc genhtml_branch_coverage=1
00:11:39.061 --rc genhtml_function_coverage=1
00:11:39.061 --rc genhtml_legend=1
00:11:39.061 --rc geninfo_all_blocks=1
00:11:39.061 --rc geninfo_unexecuted_blocks=1
00:11:39.061
00:11:39.061 '
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:11:39.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.061 --rc genhtml_branch_coverage=1
00:11:39.061 --rc genhtml_function_coverage=1
00:11:39.061 --rc genhtml_legend=1
00:11:39.061 --rc geninfo_all_blocks=1
00:11:39.061 --rc geninfo_unexecuted_blocks=1
00:11:39.061
00:11:39.061 '
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:11:39.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.061 --rc genhtml_branch_coverage=1
00:11:39.061 --rc genhtml_function_coverage=1
00:11:39.061 --rc genhtml_legend=1
00:11:39.061 --rc geninfo_all_blocks=1
00:11:39.061 --rc geninfo_unexecuted_blocks=1
00:11:39.061
00:11:39.061 '
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:39.061 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:39.062 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z rdma ']'
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:11:39.062 08:48:01 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)'
00:11:44.339 Found 0000:da:00.0 (0x15b3 - 0x1015)
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)'
00:11:44.339 Found 0000:da:00.1 (0x15b3 - 0x1015)
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
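The discovery pass above is keyed on PCI vendor/device IDs: with SPDK_TEST_NVMF_NICS=mlx5 only the mlx array survives, and each surviving PCI function is then mapped to its netdev through the sysfs glob traced next (nvmf/common.sh@409). A rough standalone equivalent; using lspci here is an assumption, since the script reads a prebuilt pci_bus_cache instead:

    # 0x15b3 is the Mellanox vendor ID; 0x1015 (seen above) is ConnectX-4 Lx
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        # map each PCI function to the kernel net devices it exposes
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdev ]] && echo "Found net devices under $pci: ${netdev##*/}"
        done
    done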
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0'
00:11:44.339 Found net devices under 0000:da:00.0: mlx_0_0
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1'
00:11:44.339 Found net devices under 0000:da:00.1: mlx_0_1
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ rdma == tcp ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == rdma ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # rdma_device_init
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@527 -- # load_ib_rdma_modules
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@528 -- # allocate_nic_ips
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:11:44.339 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:11:44.340 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:44.340 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff
00:11:44.340 altname enp218s0f0np0
00:11:44.340 altname ens818f0np0
00:11:44.340 inet 192.168.100.8/24 scope global mlx_0_0
00:11:44.340 valid_lft forever preferred_lft forever
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:11:44.340 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:44.340 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff
00:11:44.340 altname enp218s0f1np1
00:11:44.340 altname ens818f1np1
00:11:44.340 inet 192.168.100.9/24 scope global mlx_0_1
00:11:44.340 valid_lft forever preferred_lft forever
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]]
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # get_available_rdma_ips
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:11:44.340 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8
00:11:44.600 192.168.100.9'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # echo '192.168.100.8
00:11:44.600 192.168.100.9'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # head -n 1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # echo '192.168.100.8
00:11:44.600 192.168.100.9'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # tail -n +2
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # head -n 1
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == tcp ']'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == rdma ']'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-rdma
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=347960
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 347960
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 347960 ']'
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:44.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:44.600 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.600 [2024-11-06 08:48:07.479491] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:11:44.600 [2024-11-06 08:48:07.479544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:44.600 [2024-11-06 08:48:07.555247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:44.600 [2024-11-06 08:48:07.594919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:44.600 [2024-11-06 08:48:07.594952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:44.600 [2024-11-06 08:48:07.594958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:44.600 [2024-11-06 08:48:07.594964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:44.600 [2024-11-06 08:48:07.594968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
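nvmfappstart, just traced, backgrounds the target binary and then polls its RPC socket before letting the test proceed; pid 347960 is this run's instance. A condensed bash sketch of that sequence (the polling loop paraphrases waitforlisten from test/common/autotest_common.sh rather than quoting it; rpc_get_methods is a standard SPDK RPC):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done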
00:11:44.600 [2024-11-06 08:48:07.595531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:44.859 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:44.859 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']'
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma'
00:11:44.860 Unsupported transport: rdma
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:11:44.860 nvmf_trace.0
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:44.860 rmmod nvme_rdma
00:11:44.860 rmmod nvme_fabrics
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 347960 ']' 00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 347960 00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 347960 ']' 00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 347960 00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:44.860 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 347960 00:11:45.119 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:45.119 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:45.119 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 347960' 00:11:45.119 killing process with pid 347960 00:11:45.119 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 347960 00:11:45.119 08:48:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 347960 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:45.119 00:11:45.119 real 0m6.719s 00:11:45.119 user 0m2.555s 00:11:45.119 sys 0m4.695s 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:45.119 ************************************ 00:11:45.119 END TEST nvmf_zcopy 00:11:45.119 ************************************ 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.119 ************************************ 00:11:45.119 START TEST nvmf_nmic 00:11:45.119 ************************************ 00:11:45.119 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:45.379 * Looking for test storage... 
00:11:45.379 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lcov --version 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:45.379 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.380 --rc genhtml_branch_coverage=1 00:11:45.380 --rc genhtml_function_coverage=1 00:11:45.380 --rc genhtml_legend=1 00:11:45.380 --rc geninfo_all_blocks=1 00:11:45.380 --rc geninfo_unexecuted_blocks=1 00:11:45.380 00:11:45.380 ' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.380 --rc genhtml_branch_coverage=1 00:11:45.380 --rc genhtml_function_coverage=1 00:11:45.380 --rc genhtml_legend=1 00:11:45.380 --rc geninfo_all_blocks=1 00:11:45.380 --rc geninfo_unexecuted_blocks=1 00:11:45.380 00:11:45.380 ' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.380 --rc genhtml_branch_coverage=1 00:11:45.380 --rc genhtml_function_coverage=1 00:11:45.380 --rc genhtml_legend=1 00:11:45.380 --rc geninfo_all_blocks=1 00:11:45.380 --rc geninfo_unexecuted_blocks=1 00:11:45.380 00:11:45.380 ' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.380 --rc genhtml_branch_coverage=1 00:11:45.380 --rc genhtml_function_coverage=1 00:11:45.380 --rc genhtml_legend=1 00:11:45.380 --rc geninfo_all_blocks=1 00:11:45.380 --rc geninfo_unexecuted_blocks=1 00:11:45.380 00:11:45.380 ' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.380 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
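[editor note] The "[: : integer expression expected" message above is benign: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and POSIX test rejects an empty string as a numeric operand, so the branch simply falls through. A minimal sketch of the usual guard, with SOME_FLAG as a hypothetical stand-in for whichever variable common.sh tests there:
# Sketch only, not the actual common.sh code; SOME_FLAG is a stand-in name.
# Defaulting the expansion keeps '[' from ever seeing an empty numeric operand.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi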
00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.380 08:48:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.955 08:48:13 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:51.955 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:51.955 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.955 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:51.956 Found net devices under 0000:da:00.0: mlx_0_0 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:51.956 Found net devices under 0000:da:00.1: mlx_0_1 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # rdma_device_init 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
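[editor note] The modprobe sequence above loads the kernel RDMA stack (IB connection manager, core verbs, user-space access, iWARP and RDMA-CM layers) before any NIC addresses are assigned. A standalone sketch of the same bring-up, assuming some modules may be built into the running kernel rather than loadable:
# Mirrors the load order traced above; tolerate built-in modules.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || echo "warn: $mod not loaded (possibly built-in)"
done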
00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.956 08:48:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:51.956 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.956 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:51.956 altname enp218s0f0np0 00:11:51.956 altname 
ens818f0np0 00:11:51.956 inet 192.168.100.8/24 scope global mlx_0_0 00:11:51.956 valid_lft forever preferred_lft forever 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:51.956 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.956 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:51.956 altname enp218s0f1np1 00:11:51.956 altname ens818f1np1 00:11:51.956 inet 192.168.100.9/24 scope global mlx_0_1 00:11:51.956 valid_lft forever preferred_lft forever 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
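[editor note] Each RDMA interface's IPv4 address is recovered by the ip/awk/cut pipeline traced at nvmf/common.sh@116-117, yielding 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. A self-contained sketch of that lookup:
# Column 4 of 'ip -o -4 addr show' is addr/prefix; strip the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this node
get_ip_address mlx_0_1   # prints 192.168.100.9 on this node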
00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:51.956 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:51.957 192.168.100.9' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:51.957 192.168.100.9' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # head -n 1 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:11:51.957 192.168.100.9' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # tail -n +2 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # head -n 1 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=351662 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 351662 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 351662 ']' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 [2024-11-06 08:48:14.209605] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:51.957 [2024-11-06 08:48:14.209660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.957 [2024-11-06 08:48:14.288524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.957 [2024-11-06 08:48:14.332561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.957 [2024-11-06 08:48:14.332596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.957 [2024-11-06 08:48:14.332603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.957 [2024-11-06 08:48:14.332609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.957 [2024-11-06 08:48:14.332615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
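[editor note] nvmfappstart passed '-m 0xF' to the target, and mask 0xF is binary 1111, which is why exactly four reactors come up on cores 0 through 3 in the notices above. A sketch of the equivalent manual launch plus the socket poll that waitforlisten performs, assuming the default /var/tmp/spdk.sock RPC socket:
# Start the target on cores 0-3, then poll until the RPC socket answers.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done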
00:11:51.957 [2024-11-06 08:48:14.334116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.957 [2024-11-06 08:48:14.334239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.957 [2024-11-06 08:48:14.334288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.957 [2024-11-06 08:48:14.334289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 [2024-11-06 08:48:14.492606] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10f1da0/0x10f6290) succeed. 00:11:51.957 [2024-11-06 08:48:14.501627] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10f3430/0x1137930) succeed. 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 Malloc0 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:51.957 08:48:14 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 [2024-11-06 08:48:14.685301] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:51.957 test case1: single bdev can't be used in multiple subsystems 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.957 [2024-11-06 08:48:14.713074] bdev.c:8456:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:51.957 [2024-11-06 08:48:14.713092] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:51.957 [2024-11-06 08:48:14.713099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.957 request: 00:11:51.957 { 00:11:51.957 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:51.957 "namespace": { 00:11:51.957 "bdev_name": "Malloc0", 00:11:51.957 "no_auto_visible": false, 00:11:51.957 "no_metadata": false 00:11:51.957 }, 00:11:51.957 "method": "nvmf_subsystem_add_ns", 00:11:51.957 "req_id": 1 00:11:51.957 } 00:11:51.957 Got JSON-RPC error response 00:11:51.957 response: 00:11:51.957 { 00:11:51.957 "code": -32602, 00:11:51.957 "message": "Invalid parameters" 00:11:51.957 } 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:11:51.957 Adding namespace failed - expected result. 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:51.957 test case2: host connect to nvmf target in multiple paths 00:11:51.957 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:11:51.958 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.958 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.958 [2024-11-06 08:48:14.725127] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:11:51.958 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.958 08:48:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:52.895 08:48:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:11:53.832 08:48:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.832 08:48:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.832 08:48:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.832 08:48:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:53.832 08:48:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:55.736 08:48:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:55.736 08:48:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:55.736 08:48:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.736 08:48:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:55.736 08:48:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.736 08:48:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:55.736 08:48:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:55.995 [global] 00:11:55.995 thread=1 00:11:55.995 invalidate=1 00:11:55.995 rw=write 00:11:55.995 time_based=1 00:11:55.995 runtime=1 00:11:55.995 ioengine=libaio 00:11:55.995 direct=1 00:11:55.995 bs=4096 00:11:55.995 iodepth=1 00:11:55.995 norandommap=0 00:11:55.995 numjobs=1 00:11:55.995 00:11:55.995 verify_dump=1 00:11:55.995 verify_backlog=512 00:11:55.995 verify_state_save=0 00:11:55.995 do_verify=1 00:11:55.995 verify=crc32c-intel 00:11:55.995 [job0] 00:11:55.995 filename=/dev/nvme0n1 00:11:55.995 Could not set queue depth 
(nvme0n1) 00:11:56.561 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.561 fio-3.35 00:11:56.561 Starting 1 thread 00:11:57.503 00:11:57.503 job0: (groupid=0, jobs=1): err= 0: pid=352665: Wed Nov 6 08:48:20 2024 00:11:57.503 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:11:57.503 slat (nsec): min=6162, max=33586, avg=6856.45, stdev=829.18 00:11:57.503 clat (usec): min=48, max=100, avg=59.49, stdev= 3.75 00:11:57.503 lat (usec): min=57, max=107, avg=66.34, stdev= 3.82 00:11:57.503 clat percentiles (nsec): 00:11:57.503 | 1.00th=[52480], 5.00th=[54016], 10.00th=[54528], 20.00th=[56064], 00:11:57.503 | 30.00th=[57600], 40.00th=[58112], 50.00th=[59136], 60.00th=[60160], 00:11:57.503 | 70.00th=[61184], 80.00th=[62720], 90.00th=[64256], 95.00th=[66048], 00:11:57.503 | 99.00th=[69120], 99.50th=[70144], 99.90th=[75264], 99.95th=[81408], 00:11:57.503 | 99.99th=[99840] 00:11:57.503 write: IOPS=7482, BW=29.2MiB/s (30.6MB/s)(29.3MiB/1001msec); 0 zone resets 00:11:57.503 slat (nsec): min=8190, max=45501, avg=9003.52, stdev=916.30 00:11:57.503 clat (usec): min=47, max=203, avg=57.20, stdev= 5.04 00:11:57.503 lat (usec): min=56, max=212, avg=66.21, stdev= 5.16 00:11:57.503 clat percentiles (usec): 00:11:57.503 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 52], 20.00th=[ 54], 00:11:57.503 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:11:57.503 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 64], 00:11:57.503 | 99.00th=[ 70], 99.50th=[ 77], 99.90th=[ 105], 99.95th=[ 128], 00:11:57.503 | 99.99th=[ 204] 00:11:57.503 bw ( KiB/s): min=30216, max=30216, per=100.00%, avg=30216.00, stdev= 0.00, samples=1 00:11:57.503 iops : min= 7554, max= 7554, avg=7554.00, stdev= 0.00, samples=1 00:11:57.503 lat (usec) : 50=0.72%, 100=99.21%, 250=0.07% 00:11:57.503 cpu : usr=5.90%, sys=12.40%, ctx=14658, majf=0, minf=1 00:11:57.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.503 issued rwts: total=7168,7490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.503 00:11:57.503 Run status group 0 (all jobs): 00:11:57.503 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:11:57.503 WRITE: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=29.3MiB (30.7MB), run=1001-1001msec 00:11:57.503 00:11:57.503 Disk stats (read/write): 00:11:57.503 nvme0n1: ios=6551/6656, merge=0/0, ticks=384/355, in_queue=739, util=90.58% 00:11:57.503 08:48:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- 
# lsblk -l -o NAME,SERIAL 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.407 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:59.407 rmmod nvme_rdma 00:11:59.407 rmmod nvme_fabrics 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 351662 ']' 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 351662 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 351662 ']' 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 351662 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 351662 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 351662' 00:11:59.666 killing process with pid 351662 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 351662 00:11:59.666 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 351662 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:59.925 00:11:59.925 real 0m14.659s 00:11:59.925 user 0m40.720s 00:11:59.925 sys 0m5.431s 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:59.925 ************************************ 
00:11:59.925 END TEST nvmf_nmic 00:11:59.925 ************************************ 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:59.925 ************************************ 00:11:59.925 START TEST nvmf_fio_target 00:11:59.925 ************************************ 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:59.925 * Looking for test storage... 00:11:59.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lcov --version 00:11:59.925 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:00.185 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:00.185 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.185 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.185 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.185 08:48:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.185 --rc genhtml_branch_coverage=1 00:12:00.185 --rc genhtml_function_coverage=1 00:12:00.185 --rc genhtml_legend=1 00:12:00.185 --rc geninfo_all_blocks=1 00:12:00.185 --rc geninfo_unexecuted_blocks=1 00:12:00.185 00:12:00.185 ' 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.185 --rc genhtml_branch_coverage=1 00:12:00.185 --rc genhtml_function_coverage=1 00:12:00.185 --rc genhtml_legend=1 00:12:00.185 --rc geninfo_all_blocks=1 00:12:00.185 --rc geninfo_unexecuted_blocks=1 00:12:00.185 00:12:00.185 ' 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.185 --rc genhtml_branch_coverage=1 00:12:00.185 --rc genhtml_function_coverage=1 00:12:00.185 --rc genhtml_legend=1 00:12:00.185 --rc geninfo_all_blocks=1 00:12:00.185 --rc geninfo_unexecuted_blocks=1 00:12:00.185 00:12:00.185 ' 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:00.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.185 --rc genhtml_branch_coverage=1 00:12:00.185 --rc genhtml_function_coverage=1 00:12:00.185 --rc genhtml_legend=1 00:12:00.185 --rc geninfo_all_blocks=1 00:12:00.185 --rc geninfo_unexecuted_blocks=1 00:12:00.185 00:12:00.185 ' 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.185 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.186 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:00.186 
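[editor note] MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 describe the RAM-backed namespace both tests export: 64 MiB of 512-byte blocks. A sketch of creating it by hand with the rpc.py helper that fio.sh assigns to rpc_py next, matching the bdev_malloc_create call traced in the nmic test above:
# Creates a 64 MiB malloc bdev with 512 B blocks, named Malloc0.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0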
08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.186 08:48:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
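The gather_supported_nvmf_pci_devs trace above builds per-vendor device-ID lists (e810, x722, mlx) from a vendor:device cache before narrowing to the mlx5 entries; the matching "Found" lines appear just below. A hedged sketch of the same lookup done directly against sysfs — pci_bus_cache itself is populated elsewhere in common.sh, so this is the idea, not the script's code:

  # Enumerate Mellanox (0x15b3) PCI functions and any bound net interfaces;
  # mirrors the "Found <bdf> (vendor - device)" lines printed in the trace.
  mellanox=0x15b3
  for pci in /sys/bus/pci/devices/*; do
      [ "$(cat "$pci/vendor")" = "$mellanox" ] || continue
      echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
      ls "$pci/net" 2>/dev/null    # e.g. mlx_0_0 / mlx_0_1 once mlx5_core is bound
  done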
00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:06.758 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:06.758 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:06.758 Found net devices under 0000:da:00.0: mlx_0_0 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:06.758 Found net devices under 0000:da:00.1: mlx_0_1 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # rdma_device_init 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:06.758 08:48:28 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:06.758 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:06.759 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:06.759 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:06.759 altname enp218s0f0np0 00:12:06.759 altname ens818f0np0 00:12:06.759 inet 192.168.100.8/24 scope global mlx_0_0 00:12:06.759 valid_lft forever preferred_lft forever 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:06.759 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:06.759 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:06.759 altname enp218s0f1np1 00:12:06.759 altname ens818f1np1 00:12:06.759 inet 192.168.100.9/24 scope global mlx_0_1 00:12:06.759 valid_lft forever preferred_lft forever 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:06.759 08:48:28 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:06.759 192.168.100.9' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:06.759 192.168.100.9' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # head -n 1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:06.759 192.168.100.9' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # tail -n +2 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # head -n 1 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=356354 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 356354 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 356354 ']' 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.759 08:48:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.759 [2024-11-06 08:48:29.017785] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:12:06.759 [2024-11-06 08:48:29.017838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.759 [2024-11-06 08:48:29.094695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.760 [2024-11-06 08:48:29.137978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
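Just above, common.sh peels the two discovered RDMA addresses out of a newline-separated list with a head/tail idiom; a standalone sketch of exactly that selection (note the second address sits on its own line inside the quoted string):

  RDMA_IP_LIST='192.168.100.8
192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # first line
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # first line of the rest
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9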
00:12:06.760 [2024-11-06 08:48:29.138015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.760 [2024-11-06 08:48:29.138022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.760 [2024-11-06 08:48:29.138028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.760 [2024-11-06 08:48:29.138033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.760 [2024-11-06 08:48:29.139474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.760 [2024-11-06 08:48:29.139582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.760 [2024-11-06 08:48:29.139687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.760 [2024-11-06 08:48:29.139688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:06.760 [2024-11-06 08:48:29.465857] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11beda0/0x11c3290) succeed. 00:12:06.760 [2024-11-06 08:48:29.475123] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11c0430/0x1204930) succeed. 
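The rpc.py calls that follow assemble the fio target: seven 64 MiB malloc bdevs (MALLOC_BDEV_SIZE=64, block size 512), a raid0 across two of them and a concat across three, all attached as namespaces of cnode1 alongside an RDMA listener. A condensed, hedged recap of that sequence in trace order, with the rpc.py path shortened:

  rpc=scripts/rpc.py
  for _ in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done   # -> Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0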
00:12:06.760 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.018 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:07.018 08:48:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.276 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:07.276 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.276 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:07.276 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.535 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:07.535 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:07.794 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.052 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:08.052 08:48:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.310 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:08.310 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.310 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:08.310 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:08.568 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.825 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:08.825 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.083 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:09.083 08:48:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:09.341 08:48:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:09.341 [2024-11-06 08:48:32.300907] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:09.341 08:48:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:09.599 08:48:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:09.857 08:48:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:10.792 08:48:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:10.792 08:48:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:10.792 08:48:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.792 08:48:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:10.792 08:48:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:10.792 08:48:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:12.693 08:48:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:12.951 08:48:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:12.951 08:48:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.951 08:48:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:12.951 08:48:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.951 08:48:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:12.951 08:48:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:12.951 [global] 00:12:12.951 thread=1 00:12:12.951 invalidate=1 00:12:12.951 rw=write 00:12:12.951 time_based=1 00:12:12.951 runtime=1 00:12:12.951 ioengine=libaio 00:12:12.951 direct=1 00:12:12.951 bs=4096 00:12:12.951 iodepth=1 00:12:12.951 norandommap=0 00:12:12.951 numjobs=1 00:12:12.951 00:12:12.951 verify_dump=1 00:12:12.951 verify_backlog=512 00:12:12.951 verify_state_save=0 00:12:12.951 do_verify=1 00:12:12.951 verify=crc32c-intel 00:12:12.951 [job0] 00:12:12.951 filename=/dev/nvme0n1 00:12:12.951 [job1] 00:12:12.951 filename=/dev/nvme0n2 00:12:12.951 [job2] 00:12:12.951 filename=/dev/nvme0n3 00:12:12.951 [job3] 00:12:12.951 filename=/dev/nvme0n4 00:12:12.951 Could not set queue depth (nvme0n1) 00:12:12.951 Could not set queue depth (nvme0n2) 00:12:12.951 Could not set queue depth (nvme0n3) 00:12:12.951 Could not set queue depth (nvme0n4) 00:12:13.210 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.210 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.210 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.210 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.210 fio-3.35 00:12:13.210 Starting 4 threads 00:12:14.593 00:12:14.593 job0: (groupid=0, jobs=1): err= 0: pid=357815: Wed Nov 6 08:48:37 2024 00:12:14.593 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:12:14.593 slat (nsec): min=5905, max=41984, avg=6891.41, stdev=1109.15 00:12:14.593 clat (usec): min=65, max=212, avg=82.15, stdev=15.08 00:12:14.593 lat (usec): min=72, max=219, avg=89.04, stdev=15.19 00:12:14.593 clat percentiles (usec): 00:12:14.593 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:12:14.593 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 81], 00:12:14.593 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 89], 95.00th=[ 94], 00:12:14.593 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 190], 99.95th=[ 196], 00:12:14.593 | 99.99th=[ 212] 00:12:14.593 write: IOPS=5500, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1001msec); 0 zone resets 00:12:14.593 slat (nsec): min=8031, max=47623, avg=9087.07, stdev=1071.24 00:12:14.593 clat (usec): min=53, max=205, avg=85.42, stdev=23.35 00:12:14.593 lat (usec): min=70, max=213, avg=94.51, stdev=23.43 00:12:14.593 clat percentiles (usec): 00:12:14.593 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:12:14.593 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 79], 00:12:14.593 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 133], 95.00th=[ 143], 00:12:14.593 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 192], 99.95th=[ 202], 00:12:14.593 | 99.99th=[ 206] 00:12:14.593 bw ( KiB/s): min=20672, max=20672, per=28.50%, avg=20672.00, stdev= 0.00, samples=1 00:12:14.593 iops : min= 5168, max= 5168, avg=5168.00, stdev= 0.00, samples=1 00:12:14.593 lat (usec) : 100=90.25%, 250=9.75% 00:12:14.593 cpu : usr=6.70%, sys=9.60%, ctx=10626, majf=0, minf=1 00:12:14.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.593 issued rwts: total=5120,5506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.593 job1: (groupid=0, jobs=1): err= 0: pid=357827: Wed Nov 6 08:48:37 2024 00:12:14.593 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:14.593 slat (nsec): min=6478, max=21577, avg=7458.65, stdev=729.27 00:12:14.593 clat (usec): min=64, max=201, avg=126.56, stdev=24.28 00:12:14.593 lat (usec): min=71, max=209, avg=134.01, stdev=24.31 00:12:14.593 clat percentiles (usec): 00:12:14.593 | 1.00th=[ 74], 5.00th=[ 80], 10.00th=[ 87], 20.00th=[ 114], 00:12:14.593 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:12:14.593 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 155], 95.00th=[ 172], 00:12:14.593 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 196], 99.95th=[ 200], 00:12:14.593 | 99.99th=[ 202] 00:12:14.593 write: IOPS=3870, BW=15.1MiB/s (15.9MB/s)(15.1MiB/1001msec); 0 zone resets 00:12:14.593 slat (nsec): min=8090, max=43430, avg=9290.04, stdev=1230.94 00:12:14.593 clat (usec): min=62, max=209, avg=120.67, stdev=25.83 
00:12:14.593 lat (usec): min=71, max=218, avg=129.96, stdev=25.96 00:12:14.593 clat percentiles (usec): 00:12:14.593 | 1.00th=[ 70], 5.00th=[ 76], 10.00th=[ 80], 20.00th=[ 102], 00:12:14.593 | 30.00th=[ 114], 40.00th=[ 119], 50.00th=[ 123], 60.00th=[ 126], 00:12:14.593 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 157], 95.00th=[ 165], 00:12:14.593 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 196], 99.95th=[ 200], 00:12:14.593 | 99.99th=[ 210] 00:12:14.594 bw ( KiB/s): min=16384, max=16384, per=22.59%, avg=16384.00, stdev= 0.00, samples=1 00:12:14.594 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:14.594 lat (usec) : 100=17.77%, 250=82.23% 00:12:14.594 cpu : usr=4.80%, sys=8.00%, ctx=7458, majf=0, minf=1 00:12:14.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.594 issued rwts: total=3584,3874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.594 job2: (groupid=0, jobs=1): err= 0: pid=357843: Wed Nov 6 08:48:37 2024 00:12:14.594 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:14.594 slat (nsec): min=6684, max=31982, avg=7635.12, stdev=1177.34 00:12:14.594 clat (usec): min=73, max=209, avg=130.46, stdev=15.96 00:12:14.594 lat (usec): min=80, max=217, avg=138.09, stdev=15.97 00:12:14.594 clat percentiles (usec): 00:12:14.594 | 1.00th=[ 91], 5.00th=[ 100], 10.00th=[ 115], 20.00th=[ 122], 00:12:14.594 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:12:14.594 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 161], 00:12:14.594 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 200], 00:12:14.594 | 99.99th=[ 210] 00:12:14.594 write: IOPS=3664, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1001msec); 0 zone resets 00:12:14.594 slat (nsec): min=8289, max=44703, avg=9231.44, stdev=1118.87 00:12:14.594 clat (usec): min=71, max=202, avg=124.56, stdev=18.01 00:12:14.594 lat (usec): min=80, max=214, avg=133.79, stdev=18.08 00:12:14.594 clat percentiles (usec): 00:12:14.594 | 1.00th=[ 81], 5.00th=[ 91], 10.00th=[ 104], 20.00th=[ 115], 00:12:14.594 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 127], 00:12:14.594 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 149], 95.00th=[ 157], 00:12:14.594 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 200], 99.95th=[ 202], 00:12:14.594 | 99.99th=[ 204] 00:12:14.594 bw ( KiB/s): min=16384, max=16384, per=22.59%, avg=16384.00, stdev= 0.00, samples=1 00:12:14.594 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:14.594 lat (usec) : 100=6.83%, 250=93.17% 00:12:14.594 cpu : usr=4.40%, sys=8.00%, ctx=7252, majf=0, minf=1 00:12:14.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.594 issued rwts: total=3584,3668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.594 job3: (groupid=0, jobs=1): err= 0: pid=357848: Wed Nov 6 08:48:37 2024 00:12:14.594 read: IOPS=4803, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1002msec) 00:12:14.594 slat (nsec): min=6350, max=25280, avg=7312.43, stdev=786.22 00:12:14.594 clat (usec): min=74, max=125, avg=92.31, stdev= 6.72 
00:12:14.594 lat (usec): min=81, max=133, avg=99.62, stdev= 6.77 00:12:14.594 clat percentiles (usec): 00:12:14.594 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:12:14.594 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:12:14.594 | 70.00th=[ 96], 80.00th=[ 98], 90.00th=[ 101], 95.00th=[ 105], 00:12:14.594 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 122], 00:12:14.594 | 99.99th=[ 126] 00:12:14.594 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:12:14.594 slat (nsec): min=8252, max=39975, avg=9004.55, stdev=918.08 00:12:14.594 clat (usec): min=69, max=283, avg=88.99, stdev= 7.44 00:12:14.594 lat (usec): min=78, max=291, avg=97.99, stdev= 7.52 00:12:14.594 clat percentiles (usec): 00:12:14.594 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 84], 00:12:14.594 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90], 00:12:14.594 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 98], 95.00th=[ 101], 00:12:14.594 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 123], 99.95th=[ 137], 00:12:14.594 | 99.99th=[ 285] 00:12:14.594 bw ( KiB/s): min=20480, max=20480, per=28.24%, avg=20480.00, stdev= 0.00, samples=1 00:12:14.594 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:14.594 lat (usec) : 100=90.94%, 250=9.05%, 500=0.01% 00:12:14.594 cpu : usr=4.70%, sys=11.59%, ctx=9934, majf=0, minf=1 00:12:14.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.594 issued rwts: total=4813,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.594 00:12:14.594 Run status group 0 (all jobs): 00:12:14.594 READ: bw=66.7MiB/s (69.9MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=66.8MiB (70.0MB), run=1001-1002msec 00:12:14.594 WRITE: bw=70.8MiB/s (74.3MB/s), 14.3MiB/s-21.5MiB/s (15.0MB/s-22.5MB/s), io=71.0MiB (74.4MB), run=1001-1002msec 00:12:14.594 00:12:14.594 Disk stats (read/write): 00:12:14.594 nvme0n1: ios=4646/4608, merge=0/0, ticks=349/347, in_queue=696, util=86.47% 00:12:14.594 nvme0n2: ios=3085/3130, merge=0/0, ticks=396/350, in_queue=746, util=87.21% 00:12:14.594 nvme0n3: ios=3072/3131, merge=0/0, ticks=381/362, in_queue=743, util=89.10% 00:12:14.594 nvme0n4: ios=4096/4423, merge=0/0, ticks=354/360, in_queue=714, util=89.76% 00:12:14.594 08:48:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:14.594 [global] 00:12:14.594 thread=1 00:12:14.594 invalidate=1 00:12:14.594 rw=randwrite 00:12:14.594 time_based=1 00:12:14.594 runtime=1 00:12:14.594 ioengine=libaio 00:12:14.594 direct=1 00:12:14.594 bs=4096 00:12:14.594 iodepth=1 00:12:14.594 norandommap=0 00:12:14.594 numjobs=1 00:12:14.594 00:12:14.594 verify_dump=1 00:12:14.594 verify_backlog=512 00:12:14.594 verify_state_save=0 00:12:14.594 do_verify=1 00:12:14.594 verify=crc32c-intel 00:12:14.594 [job0] 00:12:14.594 filename=/dev/nvme0n1 00:12:14.594 [job1] 00:12:14.594 filename=/dev/nvme0n2 00:12:14.594 [job2] 00:12:14.594 filename=/dev/nvme0n3 00:12:14.594 [job3] 00:12:14.594 filename=/dev/nvme0n4 00:12:14.594 Could not set queue depth (nvme0n1) 00:12:14.594 Could not set queue depth (nvme0n2) 00:12:14.594 Could not set queue depth (nvme0n3) 00:12:14.594 
Could not set queue depth (nvme0n4) 00:12:14.851 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.851 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.851 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.851 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.851 fio-3.35 00:12:14.851 Starting 4 threads 00:12:16.229 00:12:16.229 job0: (groupid=0, jobs=1): err= 0: pid=358254: Wed Nov 6 08:48:38 2024 00:12:16.230 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:16.230 slat (nsec): min=5111, max=35158, avg=7489.37, stdev=1803.74 00:12:16.230 clat (usec): min=71, max=252, avg=147.64, stdev=19.00 00:12:16.230 lat (usec): min=79, max=260, avg=155.13, stdev=19.00 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 85], 5.00th=[ 115], 10.00th=[ 124], 20.00th=[ 139], 00:12:16.230 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:12:16.230 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 174], 00:12:16.230 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 223], 99.95th=[ 223], 00:12:16.230 | 99.99th=[ 253] 00:12:16.230 write: IOPS=3412, BW=13.3MiB/s (14.0MB/s)(13.3MiB/1001msec); 0 zone resets 00:12:16.230 slat (nsec): min=6673, max=53701, avg=9595.46, stdev=2323.33 00:12:16.230 clat (usec): min=64, max=245, avg=139.30, stdev=21.76 00:12:16.230 lat (usec): min=71, max=263, avg=148.90, stdev=21.83 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 81], 5.00th=[ 101], 10.00th=[ 112], 20.00th=[ 122], 00:12:16.230 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:12:16.230 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 174], 00:12:16.230 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 223], 99.95th=[ 237], 00:12:16.230 | 99.99th=[ 245] 00:12:16.230 bw ( KiB/s): min=13880, max=13880, per=22.28%, avg=13880.00, stdev= 0.00, samples=1 00:12:16.230 iops : min= 3470, max= 3470, avg=3470.00, stdev= 0.00, samples=1 00:12:16.230 lat (usec) : 100=3.78%, 250=96.21%, 500=0.02% 00:12:16.230 cpu : usr=4.30%, sys=6.50%, ctx=6488, majf=0, minf=1 00:12:16.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 issued rwts: total=3072,3416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.230 job1: (groupid=0, jobs=1): err= 0: pid=358255: Wed Nov 6 08:48:38 2024 00:12:16.230 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:16.230 slat (nsec): min=6598, max=20909, avg=7543.56, stdev=1116.10 00:12:16.230 clat (usec): min=74, max=223, avg=146.95, stdev=18.35 00:12:16.230 lat (usec): min=82, max=230, avg=154.49, stdev=18.36 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 89], 5.00th=[ 113], 10.00th=[ 122], 20.00th=[ 139], 00:12:16.230 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:12:16.230 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:12:16.230 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 217], 00:12:16.230 | 99.99th=[ 225] 00:12:16.230 write: IOPS=3464, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec); 0 zone resets 
00:12:16.230 slat (nsec): min=7982, max=62570, avg=9084.53, stdev=1671.50 00:12:16.230 clat (usec): min=61, max=221, avg=138.69, stdev=21.57 00:12:16.230 lat (usec): min=70, max=229, avg=147.78, stdev=21.69 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 81], 5.00th=[ 98], 10.00th=[ 110], 20.00th=[ 121], 00:12:16.230 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:12:16.230 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 169], 00:12:16.230 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 217], 99.95th=[ 219], 00:12:16.230 | 99.99th=[ 221] 00:12:16.230 bw ( KiB/s): min=14184, max=14184, per=22.77%, avg=14184.00, stdev= 0.00, samples=1 00:12:16.230 iops : min= 3546, max= 3546, avg=3546.00, stdev= 0.00, samples=1 00:12:16.230 lat (usec) : 100=4.16%, 250=95.84% 00:12:16.230 cpu : usr=3.60%, sys=7.40%, ctx=6540, majf=0, minf=1 00:12:16.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 issued rwts: total=3072,3468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.230 job2: (groupid=0, jobs=1): err= 0: pid=358256: Wed Nov 6 08:48:38 2024 00:12:16.230 read: IOPS=3929, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1001msec) 00:12:16.230 slat (nsec): min=6175, max=23824, avg=7740.27, stdev=1575.25 00:12:16.230 clat (usec): min=69, max=213, avg=113.28, stdev=32.82 00:12:16.230 lat (usec): min=77, max=228, avg=121.02, stdev=33.03 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:12:16.230 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 95], 60.00th=[ 125], 00:12:16.230 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 165], 00:12:16.230 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 204], 99.95th=[ 208], 00:12:16.230 | 99.99th=[ 215] 00:12:16.230 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:16.230 slat (nsec): min=8058, max=57141, avg=9595.56, stdev=2086.11 00:12:16.230 clat (usec): min=67, max=220, avg=114.08, stdev=35.99 00:12:16.230 lat (usec): min=76, max=229, avg=123.68, stdev=36.59 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:12:16.230 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 137], 00:12:16.230 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:12:16.230 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 215], 99.95th=[ 217], 00:12:16.230 | 99.99th=[ 221] 00:12:16.230 bw ( KiB/s): min=20480, max=20480, per=32.88%, avg=20480.00, stdev= 0.00, samples=1 00:12:16.230 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:16.230 lat (usec) : 100=52.26%, 250=47.74% 00:12:16.230 cpu : usr=5.20%, sys=8.30%, ctx=8029, majf=0, minf=1 00:12:16.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 issued rwts: total=3933,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.230 job3: (groupid=0, jobs=1): err= 0: pid=358257: Wed Nov 6 08:48:38 2024 00:12:16.230 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:16.230 
slat (nsec): min=6460, max=30414, avg=7375.23, stdev=848.77 00:12:16.230 clat (usec): min=69, max=216, avg=100.70, stdev=29.63 00:12:16.230 lat (usec): min=78, max=224, avg=108.08, stdev=29.72 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:12:16.230 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89], 00:12:16.230 | 70.00th=[ 93], 80.00th=[ 127], 90.00th=[ 155], 95.00th=[ 161], 00:12:16.230 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 210], 99.95th=[ 212], 00:12:16.230 | 99.99th=[ 217] 00:12:16.230 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:12:16.230 slat (nsec): min=8034, max=34379, avg=8733.67, stdev=826.46 00:12:16.230 clat (usec): min=67, max=202, avg=96.29, stdev=28.09 00:12:16.230 lat (usec): min=75, max=210, avg=105.02, stdev=28.19 00:12:16.230 clat percentiles (usec): 00:12:16.230 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 79], 00:12:16.230 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:12:16.230 | 70.00th=[ 90], 80.00th=[ 131], 90.00th=[ 145], 95.00th=[ 153], 00:12:16.230 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 198], 99.95th=[ 200], 00:12:16.230 | 99.99th=[ 202] 00:12:16.230 bw ( KiB/s): min=16384, max=16384, per=26.30%, avg=16384.00, stdev= 0.00, samples=1 00:12:16.230 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:16.230 lat (usec) : 100=74.91%, 250=25.09% 00:12:16.230 cpu : usr=5.10%, sys=9.90%, ctx=9216, majf=0, minf=1 00:12:16.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.230 issued rwts: total=4608,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.230 00:12:16.230 Run status group 0 (all jobs): 00:12:16.230 READ: bw=57.3MiB/s (60.1MB/s), 12.0MiB/s-18.0MiB/s (12.6MB/s-18.9MB/s), io=57.4MiB (60.1MB), run=1001-1001msec 00:12:16.230 WRITE: bw=60.8MiB/s (63.8MB/s), 13.3MiB/s-18.0MiB/s (14.0MB/s-18.9MB/s), io=60.9MiB (63.8MB), run=1001-1001msec 00:12:16.230 00:12:16.230 Disk stats (read/write): 00:12:16.230 nvme0n1: ios=2610/3000, merge=0/0, ticks=379/377, in_queue=756, util=86.87% 00:12:16.230 nvme0n2: ios=2568/3072, merge=0/0, ticks=350/394, in_queue=744, util=87.23% 00:12:16.230 nvme0n3: ios=3518/3584, merge=0/0, ticks=357/369, in_queue=726, util=89.13% 00:12:16.230 nvme0n4: ios=3720/4096, merge=0/0, ticks=354/379, in_queue=733, util=89.69% 00:12:16.230 08:48:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:16.230 [global] 00:12:16.230 thread=1 00:12:16.230 invalidate=1 00:12:16.230 rw=write 00:12:16.230 time_based=1 00:12:16.230 runtime=1 00:12:16.230 ioengine=libaio 00:12:16.230 direct=1 00:12:16.230 bs=4096 00:12:16.230 iodepth=128 00:12:16.230 norandommap=0 00:12:16.230 numjobs=1 00:12:16.230 00:12:16.230 verify_dump=1 00:12:16.230 verify_backlog=512 00:12:16.230 verify_state_save=0 00:12:16.230 do_verify=1 00:12:16.230 verify=crc32c-intel 00:12:16.230 [job0] 00:12:16.230 filename=/dev/nvme0n1 00:12:16.230 [job1] 00:12:16.230 filename=/dev/nvme0n2 00:12:16.230 [job2] 00:12:16.230 filename=/dev/nvme0n3 00:12:16.230 [job3] 00:12:16.230 filename=/dev/nvme0n4 00:12:16.230 Could not set queue depth 
(nvme0n1) 00:12:16.230 Could not set queue depth (nvme0n2) 00:12:16.230 Could not set queue depth (nvme0n3) 00:12:16.230 Could not set queue depth (nvme0n4) 00:12:16.230 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.231 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.231 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.231 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.231 fio-3.35 00:12:16.231 Starting 4 threads 00:12:17.605 00:12:17.605 job0: (groupid=0, jobs=1): err= 0: pid=358625: Wed Nov 6 08:48:40 2024 00:12:17.605 read: IOPS=3871, BW=15.1MiB/s (15.9MB/s)(15.1MiB/1001msec) 00:12:17.605 slat (nsec): min=1420, max=4108.6k, avg=118944.73, stdev=427422.81 00:12:17.605 clat (usec): min=399, max=20120, avg=15225.93, stdev=6094.97 00:12:17.605 lat (usec): min=759, max=20241, avg=15344.87, stdev=6125.54 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[ 1663], 5.00th=[ 3163], 10.00th=[ 4883], 20.00th=[ 6325], 00:12:17.605 | 30.00th=[13829], 40.00th=[18220], 50.00th=[19006], 60.00th=[19268], 00:12:17.605 | 70.00th=[19530], 80.00th=[19530], 90.00th=[19792], 95.00th=[19792], 00:12:17.605 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:12:17.605 | 99.99th=[20055] 00:12:17.605 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:17.605 slat (usec): min=2, max=3893, avg=124.64, stdev=428.13 00:12:17.605 clat (usec): min=611, max=20129, avg=16451.30, stdev=3878.85 00:12:17.605 lat (usec): min=683, max=20236, avg=16575.94, stdev=3891.37 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[ 3425], 5.00th=[ 5276], 10.00th=[10159], 20.00th=[15401], 00:12:17.605 | 30.00th=[16450], 40.00th=[17957], 50.00th=[18220], 60.00th=[18220], 00:12:17.605 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[19006], 00:12:17.605 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:12:17.605 | 99.99th=[20055] 00:12:17.605 bw ( KiB/s): min=14080, max=14080, per=16.05%, avg=14080.00, stdev= 0.00, samples=1 00:12:17.605 iops : min= 3520, max= 3520, avg=3520.00, stdev= 0.00, samples=1 00:12:17.605 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.04% 00:12:17.605 lat (msec) : 2=0.77%, 4=3.37%, 10=12.78%, 20=82.21%, 50=0.79% 00:12:17.605 cpu : usr=1.80%, sys=4.00%, ctx=1324, majf=0, minf=1 00:12:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:17.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.605 issued rwts: total=3875,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.605 job1: (groupid=0, jobs=1): err= 0: pid=358626: Wed Nov 6 08:48:40 2024 00:12:17.605 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(41.0MiB/1004msec) 00:12:17.605 slat (nsec): min=1345, max=5041.2k, avg=47498.49, stdev=187288.56 00:12:17.605 clat (usec): min=3064, max=22510, avg=6116.80, stdev=2353.15 00:12:17.605 lat (usec): min=3746, max=22512, avg=6164.30, stdev=2366.27 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:12:17.605 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5669], 
00:12:17.605 | 70.00th=[ 5735], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 7046], 00:12:17.605 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20317], 99.95th=[21890], 00:12:17.605 | 99.99th=[22414] 00:12:17.605 write: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(42.0MiB/1004msec); 0 zone resets 00:12:17.605 slat (nsec): min=1895, max=2761.7k, avg=44128.88, stdev=160860.00 00:12:17.605 clat (usec): min=3922, max=17450, avg=5861.31, stdev=2168.42 00:12:17.605 lat (usec): min=3930, max=17799, avg=5905.43, stdev=2178.95 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5145], 00:12:17.605 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:12:17.605 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 6128], 95.00th=[ 9634], 00:12:17.605 | 99.00th=[16450], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:12:17.605 | 99.99th=[17433] 00:12:17.605 bw ( KiB/s): min=39624, max=46392, per=49.03%, avg=43008.00, stdev=4785.70, samples=2 00:12:17.605 iops : min= 9906, max=11598, avg=10752.00, stdev=1196.42, samples=2 00:12:17.605 lat (msec) : 4=0.07%, 10=95.61%, 20=4.21%, 50=0.11% 00:12:17.605 cpu : usr=4.29%, sys=5.58%, ctx=1623, majf=0, minf=2 00:12:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:17.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.605 issued rwts: total=10486,10752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.605 job2: (groupid=0, jobs=1): err= 0: pid=358627: Wed Nov 6 08:48:40 2024 00:12:17.605 read: IOPS=3343, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1004msec) 00:12:17.605 slat (nsec): min=1390, max=4504.0k, avg=146717.79, stdev=562766.51 00:12:17.605 clat (usec): min=3062, max=22779, avg=18861.75, stdev=2034.40 00:12:17.605 lat (usec): min=3753, max=22783, avg=19008.47, stdev=1962.70 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[ 9634], 5.00th=[15533], 10.00th=[16319], 20.00th=[18482], 00:12:17.605 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:12:17.605 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[21103], 00:12:17.605 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22676], 99.95th=[22676], 00:12:17.605 | 99.99th=[22676] 00:12:17.605 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:12:17.605 slat (usec): min=2, max=3800, avg=137.67, stdev=501.26 00:12:17.605 clat (usec): min=7520, max=21181, avg=17665.38, stdev=1637.36 00:12:17.605 lat (usec): min=7527, max=22113, avg=17803.05, stdev=1574.11 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[11731], 5.00th=[14615], 10.00th=[15139], 20.00th=[16909], 00:12:17.605 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:12:17.605 | 70.00th=[18482], 80.00th=[18744], 90.00th=[18744], 95.00th=[19006], 00:12:17.605 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:12:17.605 | 99.99th=[21103] 00:12:17.605 bw ( KiB/s): min=14272, max=14400, per=16.34%, avg=14336.00, stdev=90.51, samples=2 00:12:17.605 iops : min= 3568, max= 3600, avg=3584.00, stdev=22.63, samples=2 00:12:17.605 lat (msec) : 4=0.06%, 10=0.50%, 20=93.93%, 50=5.50% 00:12:17.605 cpu : usr=2.09%, sys=3.19%, ctx=1072, majf=0, minf=1 00:12:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:17.605 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.605 issued rwts: total=3357,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.605 job3: (groupid=0, jobs=1): err= 0: pid=358628: Wed Nov 6 08:48:40 2024 00:12:17.605 read: IOPS=3320, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:12:17.605 slat (nsec): min=1470, max=5174.4k, avg=147524.46, stdev=509682.15 00:12:17.605 clat (usec): min=3116, max=23851, avg=18836.96, stdev=2122.83 00:12:17.605 lat (usec): min=3811, max=23854, avg=18984.49, stdev=2072.38 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[10159], 5.00th=[15533], 10.00th=[16319], 20.00th=[18482], 00:12:17.605 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19268], 60.00th=[19530], 00:12:17.605 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[21103], 00:12:17.605 | 99.00th=[22414], 99.50th=[22676], 99.90th=[23200], 99.95th=[23725], 00:12:17.605 | 99.99th=[23725] 00:12:17.605 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:12:17.605 slat (usec): min=2, max=3400, avg=138.61, stdev=463.36 00:12:17.605 clat (usec): min=12714, max=20678, avg=17817.81, stdev=1257.77 00:12:17.605 lat (usec): min=12717, max=20681, avg=17956.42, stdev=1183.00 00:12:17.605 clat percentiles (usec): 00:12:17.605 | 1.00th=[14615], 5.00th=[15270], 10.00th=[15533], 20.00th=[16909], 00:12:17.605 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:12:17.605 | 70.00th=[18482], 80.00th=[18482], 90.00th=[19006], 95.00th=[19006], 00:12:17.605 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:12:17.605 | 99.99th=[20579] 00:12:17.605 bw ( KiB/s): min=14032, max=14640, per=16.34%, avg=14336.00, stdev=429.92, samples=2 00:12:17.605 iops : min= 3508, max= 3660, avg=3584.00, stdev=107.48, samples=2 00:12:17.605 lat (msec) : 4=0.12%, 10=0.36%, 20=95.03%, 50=4.50% 00:12:17.605 cpu : usr=2.09%, sys=2.59%, ctx=1565, majf=0, minf=1 00:12:17.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:17.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.606 issued rwts: total=3334,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.606 00:12:17.606 Run status group 0 (all jobs): 00:12:17.606 READ: bw=81.9MiB/s (85.9MB/s), 13.0MiB/s-40.8MiB/s (13.6MB/s-42.8MB/s), io=82.2MiB (86.2MB), run=1001-1004msec 00:12:17.606 WRITE: bw=85.7MiB/s (89.8MB/s), 13.9MiB/s-41.8MiB/s (14.6MB/s-43.9MB/s), io=86.0MiB (90.2MB), run=1001-1004msec 00:12:17.606 00:12:17.606 Disk stats (read/write): 00:12:17.606 nvme0n1: ios=3239/3584, merge=0/0, ticks=14167/16049, in_queue=30216, util=86.97% 00:12:17.606 nvme0n2: ios=9554/9728, merge=0/0, ticks=17429/16287, in_queue=33716, util=87.64% 00:12:17.606 nvme0n3: ios=2798/3072, merge=0/0, ticks=13444/13741, in_queue=27185, util=89.23% 00:12:17.606 nvme0n4: ios=2791/3072, merge=0/0, ticks=13467/13810, in_queue=27277, util=89.79% 00:12:17.606 08:48:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:17.606 [global] 00:12:17.606 thread=1 00:12:17.606 invalidate=1 00:12:17.606 rw=randwrite 00:12:17.606 time_based=1 
00:12:17.606 runtime=1 00:12:17.606 ioengine=libaio 00:12:17.606 direct=1 00:12:17.606 bs=4096 00:12:17.606 iodepth=128 00:12:17.606 norandommap=0 00:12:17.606 numjobs=1 00:12:17.606 00:12:17.606 verify_dump=1 00:12:17.606 verify_backlog=512 00:12:17.606 verify_state_save=0 00:12:17.606 do_verify=1 00:12:17.606 verify=crc32c-intel 00:12:17.606 [job0] 00:12:17.606 filename=/dev/nvme0n1 00:12:17.606 [job1] 00:12:17.606 filename=/dev/nvme0n2 00:12:17.606 [job2] 00:12:17.606 filename=/dev/nvme0n3 00:12:17.606 [job3] 00:12:17.606 filename=/dev/nvme0n4 00:12:17.606 Could not set queue depth (nvme0n1) 00:12:17.606 Could not set queue depth (nvme0n2) 00:12:17.606 Could not set queue depth (nvme0n3) 00:12:17.606 Could not set queue depth (nvme0n4) 00:12:17.865 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.865 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.865 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.865 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.865 fio-3.35 00:12:17.865 Starting 4 threads 00:12:19.259 00:12:19.259 job0: (groupid=0, jobs=1): err= 0: pid=359007: Wed Nov 6 08:48:41 2024 00:12:19.259 read: IOPS=3327, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1004msec) 00:12:19.259 slat (nsec): min=1555, max=1685.8k, avg=147457.44, stdev=316936.91 00:12:19.259 clat (usec): min=3430, max=21177, avg=18793.17, stdev=1932.75 00:12:19.259 lat (usec): min=3879, max=21216, avg=18940.62, stdev=1915.46 00:12:19.259 clat percentiles (usec): 00:12:19.259 | 1.00th=[ 7898], 5.00th=[15664], 10.00th=[16909], 20.00th=[18744], 00:12:19.259 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:12:19.259 | 70.00th=[19530], 80.00th=[19530], 90.00th=[19792], 95.00th=[19792], 00:12:19.259 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:12:19.259 | 99.99th=[21103] 00:12:19.259 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:12:19.259 slat (usec): min=2, max=1609, avg=137.03, stdev=292.34 00:12:19.259 clat (usec): min=14237, max=19385, avg=17845.57, stdev=1165.74 00:12:19.259 lat (usec): min=14331, max=19678, avg=17982.60, stdev=1140.65 00:12:19.260 clat percentiles (usec): 00:12:19.260 | 1.00th=[14746], 5.00th=[15139], 10.00th=[15270], 20.00th=[17433], 00:12:19.260 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:12:19.260 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[19006], 00:12:19.260 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:12:19.260 | 99.99th=[19268] 00:12:19.260 bw ( KiB/s): min=14040, max=14632, per=16.46%, avg=14336.00, stdev=418.61, samples=2 00:12:19.260 iops : min= 3510, max= 3658, avg=3584.00, stdev=104.65, samples=2 00:12:19.260 lat (msec) : 4=0.07%, 10=0.59%, 20=97.81%, 50=1.53% 00:12:19.260 cpu : usr=2.69%, sys=3.99%, ctx=2210, majf=0, minf=1 00:12:19.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:19.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.260 issued rwts: total=3341,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.260 job1: (groupid=0, jobs=1): 
err= 0: pid=359008: Wed Nov 6 08:48:41 2024 00:12:19.260 read: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(42.0MiB/1004msec) 00:12:19.260 slat (nsec): min=1380, max=2908.3k, avg=45568.86, stdev=164164.21 00:12:19.260 clat (usec): min=4472, max=16283, avg=6063.82, stdev=2070.49 00:12:19.260 lat (usec): min=4980, max=16660, avg=6109.39, stdev=2082.14 00:12:19.260 clat percentiles (usec): 00:12:19.260 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5407], 00:12:19.260 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5669], 00:12:19.260 | 70.00th=[ 5735], 80.00th=[ 5800], 90.00th=[ 6063], 95.00th=[ 6587], 00:12:19.260 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16188], 99.95th=[16188], 00:12:19.260 | 99.99th=[16319] 00:12:19.260 write: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(43.4MiB/1004msec); 0 zone resets 00:12:19.260 slat (nsec): min=1826, max=1237.7k, avg=42455.92, stdev=144708.44 00:12:19.260 clat (usec): min=3440, max=18589, avg=5579.54, stdev=1711.30 00:12:19.260 lat (usec): min=3865, max=18601, avg=5621.99, stdev=1721.89 00:12:19.260 clat percentiles (usec): 00:12:19.260 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 5080], 00:12:19.260 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5342], 00:12:19.260 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5932], 00:12:19.260 | 99.00th=[15795], 99.50th=[16188], 99.90th=[17433], 99.95th=[17695], 00:12:19.260 | 99.99th=[18482] 00:12:19.260 bw ( KiB/s): min=40136, max=47768, per=50.45%, avg=43952.00, stdev=5396.64, samples=2 00:12:19.260 iops : min=10034, max=11942, avg=10988.00, stdev=1349.16, samples=2 00:12:19.260 lat (msec) : 4=0.02%, 10=96.17%, 20=3.81% 00:12:19.260 cpu : usr=4.89%, sys=8.77%, ctx=1835, majf=0, minf=2 00:12:19.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:12:19.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.260 issued rwts: total=10752,11115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.260 job2: (groupid=0, jobs=1): err= 0: pid=359009: Wed Nov 6 08:48:41 2024 00:12:19.260 read: IOPS=3276, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1004msec) 00:12:19.260 slat (nsec): min=1354, max=1779.8k, avg=148103.16, stdev=327684.18 00:12:19.260 clat (usec): min=3330, max=20521, avg=18852.13, stdev=1920.67 00:12:19.260 lat (usec): min=3795, max=20562, avg=19000.24, stdev=1900.02 00:12:19.260 clat percentiles (usec): 00:12:19.260 | 1.00th=[ 7373], 5.00th=[16319], 10.00th=[17695], 20.00th=[18744], 00:12:19.260 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:12:19.260 | 70.00th=[19530], 80.00th=[19530], 90.00th=[19792], 95.00th=[19792], 00:12:19.260 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:12:19.260 | 99.99th=[20579] 00:12:19.260 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:12:19.260 slat (nsec): min=1906, max=2454.5k, avg=137695.90, stdev=304836.58 00:12:19.260 clat (usec): min=14388, max=19499, avg=18023.16, stdev=843.45 00:12:19.260 lat (usec): min=14441, max=19959, avg=18160.85, stdev=797.01 00:12:19.260 clat percentiles (usec): 00:12:19.260 | 1.00th=[15664], 5.00th=[16188], 10.00th=[16581], 20.00th=[17433], 00:12:19.260 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:12:19.260 | 70.00th=[18482], 80.00th=[18744], 90.00th=[18744], 95.00th=[19006], 
00:12:19.260 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:12:19.260 | 99.99th=[19530] 00:12:19.260 bw ( KiB/s): min=13944, max=14728, per=16.46%, avg=14336.00, stdev=554.37, samples=2 00:12:19.260 iops : min= 3486, max= 3682, avg=3584.00, stdev=138.59, samples=2 00:12:19.260 lat (msec) : 4=0.07%, 10=0.68%, 20=97.73%, 50=1.51% 00:12:19.260 cpu : usr=3.39%, sys=3.69%, ctx=1844, majf=0, minf=1 00:12:19.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:19.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.260 issued rwts: total=3290,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.260 job3: (groupid=0, jobs=1): err= 0: pid=359010: Wed Nov 6 08:48:41 2024 00:12:19.260 read: IOPS=3313, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:12:19.260 slat (nsec): min=1395, max=997897, avg=147632.93, stdev=294765.33 00:12:19.260 clat (usec): min=3385, max=20526, avg=18825.07, stdev=1890.82 00:12:19.260 lat (usec): min=3856, max=21184, avg=18972.70, stdev=1876.81 00:12:19.260 clat percentiles (usec): 00:12:19.260 | 1.00th=[ 7963], 5.00th=[15795], 10.00th=[17695], 20.00th=[18744], 00:12:19.260 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:12:19.260 | 70.00th=[19530], 80.00th=[19530], 90.00th=[19792], 95.00th=[19792], 00:12:19.260 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:12:19.260 | 99.99th=[20579] 00:12:19.260 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:12:19.260 slat (nsec): min=1915, max=1441.9k, avg=137271.31, stdev=273798.13 00:12:19.260 clat (usec): min=14298, max=19694, avg=17877.83, stdev=1112.03 00:12:19.260 lat (usec): min=14339, max=19698, avg=18015.10, stdev=1089.94 00:12:19.260 clat percentiles (usec): 00:12:19.260 | 1.00th=[14877], 5.00th=[15139], 10.00th=[15664], 20.00th=[17695], 00:12:19.260 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:12:19.260 | 70.00th=[18482], 80.00th=[18482], 90.00th=[18744], 95.00th=[19006], 00:12:19.260 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:12:19.260 | 99.99th=[19792] 00:12:19.260 bw ( KiB/s): min=14016, max=14656, per=16.46%, avg=14336.00, stdev=452.55, samples=2 00:12:19.260 iops : min= 3504, max= 3664, avg=3584.00, stdev=113.14, samples=2 00:12:19.260 lat (msec) : 4=0.04%, 10=0.69%, 20=97.76%, 50=1.50% 00:12:19.260 cpu : usr=2.59%, sys=3.99%, ctx=1832, majf=0, minf=1 00:12:19.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:19.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.260 issued rwts: total=3327,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.260 00:12:19.260 Run status group 0 (all jobs): 00:12:19.260 READ: bw=80.6MiB/s (84.5MB/s), 12.8MiB/s-41.8MiB/s (13.4MB/s-43.9MB/s), io=80.9MiB (84.8MB), run=1004-1004msec 00:12:19.260 WRITE: bw=85.1MiB/s (89.2MB/s), 13.9MiB/s-43.2MiB/s (14.6MB/s-45.3MB/s), io=85.4MiB (89.6MB), run=1004-1004msec 00:12:19.260 00:12:19.260 Disk stats (read/write): 00:12:19.260 nvme0n1: ios=2609/3066, merge=0/0, ticks=12425/13942, in_queue=26367, util=84.07% 00:12:19.260 nvme0n2: ios=9662/9728, merge=0/0, 
ticks=16372/15523, in_queue=31895, util=85.10% 00:12:19.260 nvme0n3: ios=2560/3058, merge=0/0, ticks=12414/13926, in_queue=26340, util=88.44% 00:12:19.260 nvme0n4: ios=2560/3069, merge=0/0, ticks=12433/13955, in_queue=26388, util=89.48% 00:12:19.260 08:48:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:19.260 08:48:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=359236 00:12:19.260 08:48:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:19.260 08:48:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:19.260 [global] 00:12:19.260 thread=1 00:12:19.260 invalidate=1 00:12:19.260 rw=read 00:12:19.260 time_based=1 00:12:19.260 runtime=10 00:12:19.260 ioengine=libaio 00:12:19.260 direct=1 00:12:19.260 bs=4096 00:12:19.260 iodepth=1 00:12:19.260 norandommap=1 00:12:19.260 numjobs=1 00:12:19.260 00:12:19.260 [job0] 00:12:19.260 filename=/dev/nvme0n1 00:12:19.260 [job1] 00:12:19.260 filename=/dev/nvme0n2 00:12:19.260 [job2] 00:12:19.260 filename=/dev/nvme0n3 00:12:19.260 [job3] 00:12:19.260 filename=/dev/nvme0n4 00:12:19.260 Could not set queue depth (nvme0n1) 00:12:19.260 Could not set queue depth (nvme0n2) 00:12:19.260 Could not set queue depth (nvme0n3) 00:12:19.260 Could not set queue depth (nvme0n4) 00:12:19.523 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.523 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.524 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.524 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.524 fio-3.35 00:12:19.524 Starting 4 threads 00:12:22.056 08:48:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:22.315 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=74203136, buflen=4096 00:12:22.315 fio: pid=359379, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:22.315 08:48:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:22.574 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=81936384, buflen=4096 00:12:22.574 fio: pid=359378, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:22.574 08:48:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.574 08:48:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:22.574 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=30879744, buflen=4096 00:12:22.574 fio: pid=359376, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:22.833 08:48:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.833 08:48:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:22.833 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=37163008, buflen=4096 00:12:22.833 fio: pid=359377, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:23.093 08:48:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.093 08:48:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:23.093 00:12:23.093 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=359376: Wed Nov 6 08:48:45 2024 00:12:23.093 read: IOPS=7734, BW=30.2MiB/s (31.7MB/s)(93.4MiB/3093msec) 00:12:23.093 slat (usec): min=2, max=12864, avg= 8.92, stdev=133.22 00:12:23.093 clat (usec): min=49, max=21325, avg=118.42, stdev=142.01 00:12:23.093 lat (usec): min=56, max=21332, avg=127.34, stdev=194.37 00:12:23.093 clat percentiles (usec): 00:12:23.093 | 1.00th=[ 57], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 78], 00:12:23.093 | 30.00th=[ 82], 40.00th=[ 94], 50.00th=[ 126], 60.00th=[ 133], 00:12:23.093 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 167], 95.00th=[ 174], 00:12:23.093 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 210], 99.95th=[ 221], 00:12:23.093 | 99.99th=[ 235] 00:12:23.093 bw ( KiB/s): min=25800, max=37656, per=28.84%, avg=30124.80, stdev=5308.84, samples=5 00:12:23.093 iops : min= 6450, max= 9414, avg=7531.20, stdev=1327.21, samples=5 00:12:23.093 lat (usec) : 50=0.01%, 100=41.27%, 250=58.72%, 500=0.01% 00:12:23.093 lat (msec) : 50=0.01% 00:12:23.093 cpu : usr=2.49%, sys=8.73%, ctx=23928, majf=0, minf=1 00:12:23.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.093 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.093 issued rwts: total=23924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.093 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=359377: Wed Nov 6 08:48:45 2024 00:12:23.093 read: IOPS=7597, BW=29.7MiB/s (31.1MB/s)(99.4MiB/3351msec) 00:12:23.093 slat (usec): min=3, max=11829, avg= 9.46, stdev=137.65 00:12:23.093 clat (usec): min=47, max=21493, avg=120.20, stdev=234.26 00:12:23.093 lat (usec): min=54, max=21500, avg=129.66, stdev=271.61 00:12:23.093 clat percentiles (usec): 00:12:23.093 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 76], 00:12:23.093 | 30.00th=[ 84], 40.00th=[ 113], 50.00th=[ 126], 60.00th=[ 135], 00:12:23.093 | 70.00th=[ 145], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 174], 00:12:23.093 | 99.00th=[ 188], 99.50th=[ 202], 99.90th=[ 223], 99.95th=[ 231], 00:12:23.093 | 99.99th=[21365] 00:12:23.093 bw ( KiB/s): min=26304, max=33512, per=27.49%, avg=28712.50, stdev=2794.07, samples=6 00:12:23.093 iops : min= 6576, max= 8378, avg=7178.00, stdev=698.42, samples=6 00:12:23.093 lat (usec) : 50=0.06%, 100=37.04%, 250=62.88%, 500=0.01% 00:12:23.093 lat (msec) : 2=0.01%, 50=0.01% 00:12:23.093 cpu : usr=2.51%, sys=8.51%, ctx=25467, majf=0, minf=2 00:12:23.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:23.093 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.093 issued rwts: total=25458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.093 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=359378: Wed Nov 6 08:48:45 2024 00:12:23.093 read: IOPS=6914, BW=27.0MiB/s (28.3MB/s)(78.1MiB/2893msec) 00:12:23.093 slat (usec): min=3, max=15832, avg= 8.96, stdev=121.57 00:12:23.093 clat (usec): min=66, max=21345, avg=133.33, stdev=153.09 00:12:23.093 lat (usec): min=73, max=21353, avg=142.29, stdev=195.46 00:12:23.093 clat percentiles (usec): 00:12:23.093 | 1.00th=[ 78], 5.00th=[ 83], 10.00th=[ 88], 20.00th=[ 102], 00:12:23.093 | 30.00th=[ 122], 40.00th=[ 127], 50.00th=[ 133], 60.00th=[ 143], 00:12:23.093 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 174], 00:12:23.093 | 99.00th=[ 204], 99.50th=[ 219], 99.90th=[ 233], 99.95th=[ 237], 00:12:23.093 | 99.99th=[ 1004] 00:12:23.093 bw ( KiB/s): min=25720, max=31432, per=26.90%, avg=28091.20, stdev=2481.37, samples=5 00:12:23.093 iops : min= 6430, max= 7858, avg=7022.80, stdev=620.34, samples=5 00:12:23.093 lat (usec) : 100=19.20%, 250=80.78%, 500=0.01% 00:12:23.093 lat (msec) : 2=0.01%, 50=0.01% 00:12:23.093 cpu : usr=1.87%, sys=8.54%, ctx=20007, majf=0, minf=2 00:12:23.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.093 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.093 issued rwts: total=20005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.093 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=359379: Wed Nov 6 08:48:45 2024 00:12:23.093 read: IOPS=6719, BW=26.2MiB/s (27.5MB/s)(70.8MiB/2696msec) 00:12:23.093 slat (nsec): min=5570, max=41160, avg=7783.76, stdev=1598.61 00:12:23.093 clat (usec): min=71, max=246, avg=138.68, stdev=25.86 00:12:23.093 lat (usec): min=79, max=253, avg=146.46, stdev=25.90 00:12:23.093 clat percentiles (usec): 00:12:23.093 | 1.00th=[ 82], 5.00th=[ 90], 10.00th=[ 97], 20.00th=[ 123], 00:12:23.093 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 143], 60.00th=[ 147], 00:12:23.093 | 70.00th=[ 151], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 178], 00:12:23.093 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 223], 99.95th=[ 227], 00:12:23.093 | 99.99th=[ 241] 00:12:23.093 bw ( KiB/s): min=25776, max=29640, per=26.06%, avg=27214.40, stdev=1584.44, samples=5 00:12:23.093 iops : min= 6444, max= 7410, avg=6803.60, stdev=396.11, samples=5 00:12:23.093 lat (usec) : 100=11.25%, 250=88.74% 00:12:23.093 cpu : usr=1.82%, sys=8.46%, ctx=18118, majf=0, minf=2 00:12:23.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.093 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.093 issued rwts: total=18117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.093 00:12:23.093 Run status group 0 (all jobs): 00:12:23.093 READ: bw=102MiB/s (107MB/s), 26.2MiB/s-30.2MiB/s (27.5MB/s-31.7MB/s), io=342MiB (358MB), run=2696-3351msec 00:12:23.093 00:12:23.093 Disk stats (read/write): 00:12:23.093 nvme0n1: 
ios=21404/0, merge=0/0, ticks=2500/0, in_queue=2500, util=94.53% 00:12:23.093 nvme0n2: ios=25458/0, merge=0/0, ticks=2894/0, in_queue=2894, util=94.57% 00:12:23.093 nvme0n3: ios=19818/0, merge=0/0, ticks=2529/0, in_queue=2529, util=95.73% 00:12:23.093 nvme0n4: ios=17617/0, merge=0/0, ticks=2291/0, in_queue=2291, util=96.44% 00:12:23.093 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.093 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:23.352 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.352 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:23.611 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.611 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:23.871 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.871 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:24.130 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:24.130 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 359236 00:12:24.130 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:24.130 08:48:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:25.067 nvmf hotplug test: fio failed as expected 00:12:25.067 08:48:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.067 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:25.067 rmmod nvme_rdma 00:12:25.067 rmmod nvme_fabrics 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 356354 ']' 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 356354 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 356354 ']' 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 356354 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.326 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 356354 00:12:25.327 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.327 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.327 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 356354' 00:12:25.327 killing process with pid 356354 00:12:25.327 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 356354 00:12:25.327 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 356354 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:25.586 00:12:25.586 real 0m25.535s 00:12:25.586 user 
1m52.628s 00:12:25.586 sys 0m8.562s 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.586 ************************************ 00:12:25.586 END TEST nvmf_fio_target 00:12:25.586 ************************************ 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:25.586 ************************************ 00:12:25.586 START TEST nvmf_bdevio 00:12:25.586 ************************************ 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:25.586 * Looking for test storage... 00:12:25.586 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lcov --version 00:12:25.586 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.846 --rc genhtml_branch_coverage=1 00:12:25.846 --rc genhtml_function_coverage=1 00:12:25.846 --rc genhtml_legend=1 00:12:25.846 --rc geninfo_all_blocks=1 00:12:25.846 --rc geninfo_unexecuted_blocks=1 00:12:25.846 00:12:25.846 ' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.846 --rc genhtml_branch_coverage=1 00:12:25.846 --rc genhtml_function_coverage=1 00:12:25.846 --rc genhtml_legend=1 00:12:25.846 --rc geninfo_all_blocks=1 00:12:25.846 --rc geninfo_unexecuted_blocks=1 00:12:25.846 00:12:25.846 ' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.846 --rc genhtml_branch_coverage=1 00:12:25.846 --rc genhtml_function_coverage=1 00:12:25.846 --rc genhtml_legend=1 00:12:25.846 --rc geninfo_all_blocks=1 00:12:25.846 --rc geninfo_unexecuted_blocks=1 00:12:25.846 00:12:25.846 ' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.846 --rc genhtml_branch_coverage=1 00:12:25.846 --rc genhtml_function_coverage=1 00:12:25.846 --rc genhtml_legend=1 00:12:25.846 --rc geninfo_all_blocks=1 00:12:25.846 --rc geninfo_unexecuted_blocks=1 00:12:25.846 00:12:25.846 ' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:25.846 08:48:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.846 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.847 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.847 08:48:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:32.432 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:32.432 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.432 08:48:54 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:32.432 Found net devices under 0000:da:00.0: mlx_0_0 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:32.432 Found net devices under 0000:da:00.1: mlx_0_1 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # rdma_device_init 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:32.432 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
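For reference, the kernel bring-up traced above (load_ib_rdma_modules, common.sh@62-72) condenses to the sequence below. The module names are taken verbatim from the modprobe calls in the trace; nvme-rdma itself is loaded a little later, at common.sh@500:

    # RDMA/IB stack required before any NVMe-oF RDMA traffic can flow.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
    modprobe nvme-rdma   # host-side fabric driver, see common.sh@500 below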
00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:32.433 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.433 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:32.433 altname enp218s0f0np0 00:12:32.433 altname ens818f0np0 00:12:32.433 inet 192.168.100.8/24 scope global mlx_0_0 00:12:32.433 valid_lft forever preferred_lft forever 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:32.433 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.433 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:32.433 altname enp218s0f1np1 00:12:32.433 altname ens818f1np1 00:12:32.433 inet 192.168.100.9/24 scope global mlx_0_1 00:12:32.433 valid_lft forever preferred_lft forever 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
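The address lookup traced at common.sh@116-117 is a three-stage pipeline: in `ip -o -4 addr show` output the fourth field is ADDR/PREFIX, awk selects it, and cut drops the prefix length. As a self-contained helper mirroring the trace:

    # Returns the primary IPv4 address of an interface, e.g.
    # 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1 in this run.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }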
00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:32.433 192.168.100.9' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:32.433 192.168.100.9' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # head -n 1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:32.433 192.168.100.9' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # tail -n +2 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # head -n 1 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma 
== rdma ']' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=363622 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 363622 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 363622 ']' 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.433 [2024-11-06 08:48:54.582315] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:12:32.433 [2024-11-06 08:48:54.582366] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.433 [2024-11-06 08:48:54.659380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.433 [2024-11-06 08:48:54.702048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.433 [2024-11-06 08:48:54.702081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.433 [2024-11-06 08:48:54.702089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.433 [2024-11-06 08:48:54.702095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.433 [2024-11-06 08:48:54.702100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
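Two steps from the trace above are worth spelling out. The first and second target IPs are peeled off the newline-separated RDMA_IP_LIST (common.sh@483-484), after which nvmfappstart launches the target and waitforlisten polls its RPC socket (common.sh@506-508, pid 363622 in this run):

    # IP selection exactly as traced: first line, then first of the rest.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    # Target launch with the flags shown in the trace; the harness then
    # waits for the app to answer RPCs on /var/tmp/spdk.sock.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!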
00:12:32.433 [2024-11-06 08:48:54.703579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:32.433 [2024-11-06 08:48:54.703685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:32.433 [2024-11-06 08:48:54.703768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.433 [2024-11-06 08:48:54.703769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.433 08:48:54 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.433 [2024-11-06 08:48:54.870520] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ae96a0/0x1aedb90) succeed. 00:12:32.433 [2024-11-06 08:48:54.879841] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aead30/0x1b2f230) succeed. 
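With the target's reactors up, configuration happens over the RPC socket: the RDMA transport above, then (in the trace lines that follow) a malloc bdev, a subsystem, a namespace, and an RDMA listener. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same sequence issued by hand would be:

    # Flags and names copied from the rpc_cmd calls traced at bdevio.sh@18-22.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The bdevio app is then driven with a generated JSON config (printed in full below) that attaches an NVMe bdev controller over RDMA to 192.168.100.8:4420.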
00:12:32.433 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.434 Malloc0 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.434 [2024-11-06 08:48:55.060972] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:32.434 { 00:12:32.434 "params": { 00:12:32.434 "name": "Nvme$subsystem", 00:12:32.434 "trtype": "$TEST_TRANSPORT", 00:12:32.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.434 "adrfam": "ipv4", 00:12:32.434 "trsvcid": "$NVMF_PORT", 00:12:32.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.434 "hdgst": ${hdgst:-false}, 00:12:32.434 "ddgst": ${ddgst:-false} 00:12:32.434 }, 00:12:32.434 "method": "bdev_nvme_attach_controller" 00:12:32.434 } 00:12:32.434 EOF 00:12:32.434 )") 00:12:32.434 08:48:55 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:12:32.434 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:32.434 "params": { 00:12:32.434 "name": "Nvme1", 00:12:32.434 "trtype": "rdma", 00:12:32.434 "traddr": "192.168.100.8", 00:12:32.434 "adrfam": "ipv4", 00:12:32.434 "trsvcid": "4420", 00:12:32.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.434 "hdgst": false, 00:12:32.434 "ddgst": false 00:12:32.434 }, 00:12:32.434 "method": "bdev_nvme_attach_controller" 00:12:32.434 }' 00:12:32.434 [2024-11-06 08:48:55.112140] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:12:32.434 [2024-11-06 08:48:55.112182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363656 ] 00:12:32.434 [2024-11-06 08:48:55.185242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.434 [2024-11-06 08:48:55.229362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.434 [2024-11-06 08:48:55.229469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.434 [2024-11-06 08:48:55.229469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.434 I/O targets: 00:12:32.434 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:32.434 00:12:32.434 00:12:32.434 CUnit - A unit testing framework for C - Version 2.1-3 00:12:32.434 http://cunit.sourceforge.net/ 00:12:32.434 00:12:32.434 00:12:32.434 Suite: bdevio tests on: Nvme1n1 00:12:32.434 Test: blockdev write read block ...passed 00:12:32.434 Test: blockdev write zeroes read block ...passed 00:12:32.434 Test: blockdev write zeroes read no split ...passed 00:12:32.434 Test: blockdev write zeroes read split ...passed 00:12:32.434 Test: blockdev write zeroes read split partial ...passed 00:12:32.434 Test: blockdev reset ...[2024-11-06 08:48:55.436271] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:32.693 [2024-11-06 08:48:55.458869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:12:32.693 [2024-11-06 08:48:55.485849] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:32.693 passed 00:12:32.693 Test: blockdev write read 8 blocks ...passed 00:12:32.693 Test: blockdev write read size > 128k ...passed 00:12:32.693 Test: blockdev write read invalid size ...passed 00:12:32.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.693 Test: blockdev write read max offset ...passed 00:12:32.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.693 Test: blockdev writev readv 8 blocks ...passed 00:12:32.693 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.693 Test: blockdev writev readv block ...passed 00:12:32.693 Test: blockdev writev readv size > 128k ...passed 00:12:32.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.693 Test: blockdev comparev and writev ...[2024-11-06 08:48:55.489138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.693 [2024-11-06 08:48:55.489164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:32.693 [2024-11-06 08:48:55.489175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.693 [2024-11-06 08:48:55.489182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:32.693 [2024-11-06 08:48:55.489356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.693 [2024-11-06 08:48:55.489365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:32.693 [2024-11-06 08:48:55.489373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.694 [2024-11-06 08:48:55.489380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:32.694 [2024-11-06 08:48:55.489546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.694 [2024-11-06 08:48:55.489554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:32.694 [2024-11-06 08:48:55.489562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.694 [2024-11-06 08:48:55.489569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:32.694 [2024-11-06 08:48:55.489740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.694 [2024-11-06 08:48:55.489748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:32.694 [2024-11-06 08:48:55.489756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.694 [2024-11-06 08:48:55.489762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:32.694 passed 00:12:32.694 Test: blockdev nvme passthru rw ...passed 00:12:32.694 Test: blockdev nvme passthru vendor specific ...[2024-11-06 08:48:55.490044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:32.694 [2024-11-06 08:48:55.490054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:32.694 [2024-11-06 08:48:55.490094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:32.694 [2024-11-06 08:48:55.490102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:32.694 [2024-11-06 08:48:55.490146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:32.694 [2024-11-06 08:48:55.490153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:32.694 [2024-11-06 08:48:55.490193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:32.694 [2024-11-06 08:48:55.490200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:32.694 passed 00:12:32.694 Test: blockdev nvme admin passthru ...passed 00:12:32.694 Test: blockdev copy ...passed 00:12:32.694 00:12:32.694 Run Summary: Type Total Ran Passed Failed Inactive 00:12:32.694 suites 1 1 n/a 0 0 00:12:32.694 tests 23 23 23 0 0 00:12:32.694 asserts 152 152 152 0 n/a 00:12:32.694 00:12:32.694 Elapsed time = 0.173 seconds 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:32.694 rmmod nvme_rdma 00:12:32.694 rmmod nvme_fabrics 00:12:32.694 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.953 08:48:55 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 363622 ']' 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 363622 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 363622 ']' 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 363622 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 363622 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 363622' 00:12:32.953 killing process with pid 363622 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 363622 00:12:32.953 08:48:55 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 363622 00:12:33.212 08:48:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:33.212 08:48:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:33.212 00:12:33.212 real 0m7.585s 00:12:33.212 user 0m7.973s 00:12:33.212 sys 0m4.984s 00:12:33.212 08:48:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.212 08:48:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 ************************************ 00:12:33.212 END TEST nvmf_bdevio 00:12:33.212 ************************************ 00:12:33.212 08:48:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:33.212 00:12:33.212 real 3m54.290s 00:12:33.212 user 10m26.073s 00:12:33.212 sys 1m20.852s 00:12:33.212 08:48:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.212 08:48:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 ************************************ 00:12:33.212 END TEST nvmf_target_core 00:12:33.213 ************************************ 00:12:33.213 08:48:56 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:33.213 08:48:56 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.213 08:48:56 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.213 08:48:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:33.213 ************************************ 00:12:33.213 START TEST nvmf_target_extra 00:12:33.213 ************************************ 00:12:33.213 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:33.213 * Looking for test storage... 00:12:33.213 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:12:33.213 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lcov --version 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:33.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.473 --rc genhtml_branch_coverage=1 00:12:33.473 --rc genhtml_function_coverage=1 00:12:33.473 --rc genhtml_legend=1 00:12:33.473 --rc geninfo_all_blocks=1 00:12:33.473 --rc geninfo_unexecuted_blocks=1 00:12:33.473 00:12:33.473 ' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:33.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.473 --rc genhtml_branch_coverage=1 00:12:33.473 --rc genhtml_function_coverage=1 00:12:33.473 --rc genhtml_legend=1 00:12:33.473 --rc geninfo_all_blocks=1 00:12:33.473 --rc geninfo_unexecuted_blocks=1 00:12:33.473 00:12:33.473 ' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:33.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.473 --rc genhtml_branch_coverage=1 00:12:33.473 --rc genhtml_function_coverage=1 00:12:33.473 --rc genhtml_legend=1 00:12:33.473 --rc geninfo_all_blocks=1 00:12:33.473 --rc geninfo_unexecuted_blocks=1 00:12:33.473 00:12:33.473 ' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:33.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.473 --rc genhtml_branch_coverage=1 00:12:33.473 --rc genhtml_function_coverage=1 00:12:33.473 --rc genhtml_legend=1 00:12:33.473 --rc geninfo_all_blocks=1 00:12:33.473 --rc geninfo_unexecuted_blocks=1 00:12:33.473 00:12:33.473 ' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.473 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.473 08:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.473 ************************************ 00:12:33.473 START TEST nvmf_example 00:12:33.474 ************************************ 00:12:33.474 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:33.474 * Looking for test storage... 
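Two pieces of harness machinery surface in the trace above. First, the START TEST / END TEST banners come from the run_test helper; a simplified sketch, inferred only from the banners, the argument check ('[' 3 -le 1 ']') and the real/user/sys timing lines in this log (the actual helper in autotest_common.sh does more bookkeeping):

    # Hypothetical reduction of run_test; output format mirrors the log.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }

Second, the "line 33: [: : integer expression expected" message is benign: common.sh@33 evaluates [ '' -eq 1 ], and -eq demands integer operands, so an empty expansion makes the test report an error and evaluate false instead of comparing. Defaulting the expansion is the usual fix (SOME_FLAG below is a stand-in name; the trace does not show which variable is empty):

    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"   # no error when unset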
00:12:33.474 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:33.474 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:33.474 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lcov --version 00:12:33.474 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:33.734 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.735 --rc genhtml_branch_coverage=1 00:12:33.735 --rc genhtml_function_coverage=1 00:12:33.735 --rc genhtml_legend=1 00:12:33.735 --rc geninfo_all_blocks=1 00:12:33.735 --rc geninfo_unexecuted_blocks=1 00:12:33.735 00:12:33.735 ' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.735 --rc genhtml_branch_coverage=1 00:12:33.735 --rc genhtml_function_coverage=1 00:12:33.735 --rc genhtml_legend=1 00:12:33.735 --rc geninfo_all_blocks=1 00:12:33.735 --rc geninfo_unexecuted_blocks=1 00:12:33.735 00:12:33.735 ' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.735 --rc genhtml_branch_coverage=1 00:12:33.735 --rc genhtml_function_coverage=1 00:12:33.735 --rc genhtml_legend=1 00:12:33.735 --rc geninfo_all_blocks=1 00:12:33.735 --rc geninfo_unexecuted_blocks=1 00:12:33.735 00:12:33.735 ' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.735 --rc genhtml_branch_coverage=1 00:12:33.735 --rc genhtml_function_coverage=1 00:12:33.735 --rc genhtml_legend=1 00:12:33.735 --rc geninfo_all_blocks=1 00:12:33.735 --rc geninfo_unexecuted_blocks=1 00:12:33.735 00:12:33.735 ' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
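The lcov probe traced here (lt 1.15 2, dispatched to cmp_versions in scripts/common.sh) splits both version strings on '.', '-' or ':' and compares them component-wise. A condensed equivalent of the traced logic, with missing components defaulting to 0:

    # Succeeds when $1 is strictly older than $2.
    lt() {
        local -a ver1 ver2; local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "older"   # succeeds here: 1 < 2, as in the trace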
00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.735 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.735 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.736 08:48:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.308 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
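For reference, the host identity captured near the top of this test (`nvme gen-hostnqn`, NVME_HOSTNQN, NVME_HOSTID, and the NVME_HOST array) is what a host-side connect would pass along. A sketch under the assumption of the target address and subsystem NQN set up later in this run; this exact invocation is illustrative and not part of the test:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the bare UUID, matching the value recorded above
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"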
00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:40.309 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
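The device scan traced above caches PCI functions by vendor:device pair and prints one line per match; 0x15b3:0x1015 is a Mellanox ConnectX-4 Lx function, found here on both ports of 0000:da:00. A condensed sysfs sketch of the same walk, not the script's literal code:

  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")    # e.g. 0x15b3 (Mellanox)
      device=$(<"$pci/device")    # e.g. 0x1015 (ConnectX-4 Lx)
      [[ $vendor == 0x15b3 && $device == 0x1015 ]] && echo "Found ${pci##*/} ($vendor - $device)"
  done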
00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:40.309 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:40.309 Found net devices under 0000:da:00.0: mlx_0_0 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:40.309 Found net devices under 0000:da:00.1: mlx_0_1 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:40.309 08:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # rdma_device_init 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:40.309 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:40.310 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:40.310 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:40.310 altname enp218s0f0np0 00:12:40.310 altname ens818f0np0 00:12:40.310 inet 192.168.100.8/24 scope global mlx_0_0 00:12:40.310 valid_lft forever preferred_lft forever 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:40.310 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:40.310 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:40.310 altname enp218s0f1np1 00:12:40.310 altname ens818f1np1 00:12:40.310 inet 192.168.100.9/24 scope global mlx_0_1 00:12:40.310 valid_lft forever preferred_lft forever 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- 
# get_available_rdma_ips 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:40.310 08:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:40.310 192.168.100.9' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:40.310 192.168.100.9' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # head -n 1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:40.310 192.168.100.9' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # tail -n +2 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # head -n 1 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=367017 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 367017 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 367017 ']' 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
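The address plumbing in the steps above reduces to one helper plus a two-line split. This sketch reuses the interface names and addresses from this run; get_ip_address is condensed from the pipeline traced at common.sh@117:

  get_ip_address() {
      local interface=$1
      # field 4 of one-line (-o) IPv4 output is ADDR/PREFIX; cut drops the prefix length
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"                                             # two lines: .8 then .9
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9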
00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.310 08:49:02 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.570 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.829 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.829 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:40.829 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.829 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.829 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:40.829 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:40.829 08:49:03 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:53.046 Initializing NVMe Controllers 00:12:53.046 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:53.046 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:53.046 Initialization complete. Launching workers. 00:12:53.046 ======================================================== 00:12:53.046 Latency(us) 00:12:53.046 Device Information : IOPS MiB/s Average min max 00:12:53.046 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24498.78 95.70 2610.65 647.00 12122.01 00:12:53.046 ======================================================== 00:12:53.046 Total : 24498.78 95.70 2610.65 647.00 12122.01 00:12:53.046 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:53.046 rmmod nvme_rdma 00:12:53.046 rmmod nvme_fabrics 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 367017 ']' 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 367017 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 367017 ']' 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 367017 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 367017 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:53.046 08:49:14 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 367017' 00:12:53.046 killing process with pid 367017 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 367017 00:12:53.046 08:49:14 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 367017 00:12:53.046 nvmf threads initialize successfully 00:12:53.046 bdev subsystem init successfully 00:12:53.046 created a nvmf target service 00:12:53.046 create targets's poll groups done 00:12:53.046 all subsystems of target started 00:12:53.046 nvmf target is running 00:12:53.046 all subsystems of target stopped 00:12:53.046 destroy targets's poll groups done 00:12:53.046 destroyed the nvmf target service 00:12:53.046 bdev subsystem finish successfully 00:12:53.046 nvmf threads destroy successfully 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.046 00:12:53.046 real 0m18.851s 00:12:53.046 user 0m51.976s 00:12:53.046 sys 0m4.904s 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.046 ************************************ 00:12:53.046 END TEST nvmf_example 00:12:53.046 ************************************ 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.046 ************************************ 00:12:53.046 START TEST nvmf_filesystem 00:12:53.046 ************************************ 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:53.046 * Looking for test storage... 
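Recapping the bring-up that nvmf_example just exercised: the rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations (rpc_cmd is the test harness's wrapper around that script, talking to /var/tmp/spdk.sock; the $rpc shorthand here is illustrative). Together they stand up a single-namespace RDMA subsystem:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks; the first is named Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

As a sanity check on the perf table above, IOPS times I/O size reproduces the bandwidth column: 24498.78 * 4096 B = 100,347,003 B/s, and 100,347,003 / 1048576 = 95.70 MiB/s.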
00:12:53.046 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:53.046 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:53.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.047 --rc genhtml_branch_coverage=1 00:12:53.047 --rc genhtml_function_coverage=1 00:12:53.047 --rc genhtml_legend=1 00:12:53.047 --rc geninfo_all_blocks=1 00:12:53.047 --rc geninfo_unexecuted_blocks=1 00:12:53.047 00:12:53.047 ' 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:53.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.047 --rc genhtml_branch_coverage=1 00:12:53.047 --rc genhtml_function_coverage=1 00:12:53.047 --rc genhtml_legend=1 00:12:53.047 --rc geninfo_all_blocks=1 00:12:53.047 --rc geninfo_unexecuted_blocks=1 00:12:53.047 00:12:53.047 ' 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:53.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.047 --rc genhtml_branch_coverage=1 00:12:53.047 --rc genhtml_function_coverage=1 00:12:53.047 --rc genhtml_legend=1 00:12:53.047 --rc geninfo_all_blocks=1 00:12:53.047 --rc geninfo_unexecuted_blocks=1 00:12:53.047 00:12:53.047 ' 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:53.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.047 --rc genhtml_branch_coverage=1 00:12:53.047 --rc genhtml_function_coverage=1 00:12:53.047 --rc genhtml_legend=1 00:12:53.047 --rc geninfo_all_blocks=1 00:12:53.047 --rc geninfo_unexecuted_blocks=1 00:12:53.047 00:12:53.047 ' 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:53.047 08:49:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:53.047 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:53.047 08:49:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:53.048 08:49:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:53.048 #define SPDK_CONFIG_H 00:12:53.048 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:53.048 #define SPDK_CONFIG_APPS 1 00:12:53.048 #define SPDK_CONFIG_ARCH native 00:12:53.048 #undef SPDK_CONFIG_ASAN 00:12:53.048 #undef SPDK_CONFIG_AVAHI 00:12:53.048 #undef SPDK_CONFIG_CET 00:12:53.048 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:53.048 #define SPDK_CONFIG_COVERAGE 1 00:12:53.048 #define SPDK_CONFIG_CROSS_PREFIX 00:12:53.048 #undef SPDK_CONFIG_CRYPTO 00:12:53.048 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:53.048 #undef SPDK_CONFIG_CUSTOMOCF 00:12:53.048 #undef SPDK_CONFIG_DAOS 00:12:53.048 #define SPDK_CONFIG_DAOS_DIR 00:12:53.048 #define SPDK_CONFIG_DEBUG 1 00:12:53.048 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:53.048 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:53.048 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:53.048 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:53.048 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:53.048 #undef SPDK_CONFIG_DPDK_UADK 00:12:53.048 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:53.048 #define SPDK_CONFIG_EXAMPLES 1 00:12:53.048 #undef SPDK_CONFIG_FC 00:12:53.048 #define SPDK_CONFIG_FC_PATH 00:12:53.048 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:53.048 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:53.048 #define SPDK_CONFIG_FSDEV 1 00:12:53.048 #undef SPDK_CONFIG_FUSE 00:12:53.048 #undef SPDK_CONFIG_FUZZER 00:12:53.048 #define SPDK_CONFIG_FUZZER_LIB 00:12:53.048 #undef SPDK_CONFIG_GOLANG 00:12:53.048 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:53.048 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:53.048 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:53.048 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:53.048 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:53.048 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:53.048 #undef SPDK_CONFIG_HAVE_LZ4 00:12:53.048 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:53.048 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:53.048 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:53.048 #define SPDK_CONFIG_IDXD 1 00:12:53.048 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:53.048 #undef SPDK_CONFIG_IPSEC_MB 00:12:53.048 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:53.048 #define SPDK_CONFIG_ISAL 1 00:12:53.048 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:53.048 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:53.048 #define SPDK_CONFIG_LIBDIR 00:12:53.048 #undef SPDK_CONFIG_LTO 00:12:53.048 #define SPDK_CONFIG_MAX_LCORES 128 00:12:53.048 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:53.048 #define SPDK_CONFIG_NVME_CUSE 1 00:12:53.048 #undef SPDK_CONFIG_OCF 00:12:53.048 #define SPDK_CONFIG_OCF_PATH 00:12:53.048 #define SPDK_CONFIG_OPENSSL_PATH 00:12:53.048 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:53.048 #define SPDK_CONFIG_PGO_DIR 00:12:53.048 #undef SPDK_CONFIG_PGO_USE 00:12:53.048 #define SPDK_CONFIG_PREFIX /usr/local 00:12:53.048 #undef SPDK_CONFIG_RAID5F 00:12:53.048 #undef SPDK_CONFIG_RBD 00:12:53.048 #define SPDK_CONFIG_RDMA 1 00:12:53.048 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:53.048 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:53.048 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:53.048 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:53.048 #define SPDK_CONFIG_SHARED 1 00:12:53.048 #undef SPDK_CONFIG_SMA 00:12:53.048 
#define SPDK_CONFIG_TESTS 1 00:12:53.048 #undef SPDK_CONFIG_TSAN 00:12:53.048 #define SPDK_CONFIG_UBLK 1 00:12:53.048 #define SPDK_CONFIG_UBSAN 1 00:12:53.048 #undef SPDK_CONFIG_UNIT_TESTS 00:12:53.048 #undef SPDK_CONFIG_URING 00:12:53.048 #define SPDK_CONFIG_URING_PATH 00:12:53.048 #undef SPDK_CONFIG_URING_ZNS 00:12:53.048 #undef SPDK_CONFIG_USDT 00:12:53.048 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:53.048 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:53.048 #undef SPDK_CONFIG_VFIO_USER 00:12:53.048 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:53.048 #define SPDK_CONFIG_VHOST 1 00:12:53.048 #define SPDK_CONFIG_VIRTIO 1 00:12:53.048 #undef SPDK_CONFIG_VTUNE 00:12:53.048 #define SPDK_CONFIG_VTUNE_DIR 00:12:53.048 #define SPDK_CONFIG_WERROR 1 00:12:53.048 #define SPDK_CONFIG_WPDK_DIR 00:12:53.048 #undef SPDK_CONFIG_XNVME 00:12:53.048 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.048 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:53.049 08:49:15 
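The paths/export.sh entries traced above explain why PATH in this log keeps repeating the same /opt/protoc, /opt/go and /opt/golangci directories: every test script sources export.sh again, and each pass prepends the pinned toolchain directories unconditionally. A minimal sketch of that idiom follows; the dedup helper underneath is purely illustrative (export.sh itself does no such deduplication, and the helper name is hypothetical):

    # Each source of paths/export.sh prepends the pinned toolchains, so the
    # final PATH ends up as protoc:go:golangci:...previous PATH..., repeated
    # once per sourcing -- exactly the duplication visible in this trace.
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH

    # Hypothetical helper: drop duplicate entries, keeping first occurrence.
    dedup_path() {
        local seen=":" out="" dir
        local IFS=:
        for dir in $PATH; do
            [[ $seen == *":$dir:"* ]] && continue
            seen+="$dir:"
            out+="${out:+:}$dir"
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedup_path)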
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:53.049 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:53.050 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 369267 ]] 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 369267 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.QkR5Lm 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QkR5Lm/tests/target /tmp/spdk.QkR5Lm 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:53.051 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=190383157248 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963973632 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5580816384 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97968525312 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981984768 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=13459456 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169744896 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192797184 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23052288 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981693952 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981988864 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=294912 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:53.052 08:49:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:53.052 * Looking for test storage... 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=190383157248 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=7795408896 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.052 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:12:53.052 08:49:15 
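The set_test_storage trace above (autotest_common.sh@338 through @400) parses df -T into associative arrays keyed by mount point, resolves which mount holds the test directory, and confirms that the roughly 2 GiB requested (requested_size=2214592512) fits without pushing the filesystem past 95% full (new_size=7795408896 is requested_size plus the 5580816384 bytes already used on /). A condensed sketch of that logic, with two stated assumptions: error handling is dropped, and df is invoked here with -B1 so the arithmetic is in bytes -- the exact flags the real script uses are not visible in this trace:

    set_test_storage_sketch() {
        local requested_size=$1 target_dir=$2
        local -A fss sizes avails uses
        local source fs size use avail _ mount
        # Column order matches df -T: Filesystem Type Size Used Avail Use% Mount.
        while read -r source fs size use avail _ mount; do
            fss["$mount"]=$fs
            sizes["$mount"]=$size
            uses["$mount"]=$use
            avails["$mount"]=$avail
        done < <(df -T -B1 | grep -v Filesystem)
        # Which mount actually holds the directory the test wants to write to?
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        local target_space=${avails[$mount]}
        (( target_space >= requested_size )) || return 1
        # The 95%-full check seen in the trace: requested bytes plus what the
        # filesystem already uses, as a percentage of its total size.
        local new_size=$(( requested_size + uses[$mount] ))
        if (( new_size * 100 / sizes[$mount] > 95 )); then
            printf 'warning: %s would be over 95%% full\n' "$mount" >&2
        fi
        printf '* Found test storage at %s\n' "$target_dir"
    }

    set_test_storage_sketch 2214592512 /tmp/spdk.QkR5Lm/tests/target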
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:53.052 08:49:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:53.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.052 --rc genhtml_branch_coverage=1 00:12:53.052 --rc genhtml_function_coverage=1 00:12:53.052 --rc genhtml_legend=1 00:12:53.052 --rc geninfo_all_blocks=1 00:12:53.052 --rc geninfo_unexecuted_blocks=1 00:12:53.052 00:12:53.052 ' 00:12:53.052 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:53.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.052 --rc genhtml_branch_coverage=1 00:12:53.053 --rc genhtml_function_coverage=1 00:12:53.053 --rc genhtml_legend=1 00:12:53.053 --rc geninfo_all_blocks=1 00:12:53.053 --rc geninfo_unexecuted_blocks=1 00:12:53.053 00:12:53.053 ' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:53.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.053 --rc genhtml_branch_coverage=1 00:12:53.053 --rc genhtml_function_coverage=1 00:12:53.053 --rc genhtml_legend=1 00:12:53.053 --rc geninfo_all_blocks=1 00:12:53.053 --rc geninfo_unexecuted_blocks=1 00:12:53.053 00:12:53.053 ' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:53.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.053 --rc genhtml_branch_coverage=1 00:12:53.053 --rc genhtml_function_coverage=1 00:12:53.053 --rc genhtml_legend=1 00:12:53.053 --rc geninfo_all_blocks=1 00:12:53.053 --rc geninfo_unexecuted_blocks=1 00:12:53.053 00:12:53.053 ' 00:12:53.053 
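The scripts/common.sh entries above implement the "lt 1.15 2" check that decides which lcov option syntax to export: both version strings are split on '.', '-' or ':' via IFS, then compared component by component. A self-contained re-creation, simplified to the "<" case only -- the real cmp_versions also routes through the decimal helper to validate each component (guarding against non-numeric or octal-looking fields) and supports the other comparison operators:

    lt_sketch() {
        local -a ver1 ver2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        # Compare component-wise; a missing component counts as 0.
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt_sketch 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 spelling"

As in the trace, lcov 1.x wins here, so LCOV_OPTS is exported with the lcov_branch_coverage/lcov_function_coverage names rather than the lcov 2.x equivalents.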
08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:53.053 08:49:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:59.626 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:59.627 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:59.627 
08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:59.627 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:59.627 Found net devices under 0000:da:00.0: mlx_0_0 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:59.627 Found net devices under 0000:da:00.1: mlx_0_1 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # rdma_device_init 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.627 08:49:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:59.627 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.627 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:59.627 altname enp218s0f0np0 00:12:59.627 altname ens818f0np0 00:12:59.627 inet 192.168.100.8/24 scope global mlx_0_0 00:12:59.627 valid_lft forever preferred_lft forever 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:59.627 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.627 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:59.627 altname enp218s0f1np1 00:12:59.627 altname ens818f1np1 00:12:59.627 inet 192.168.100.9/24 scope global mlx_0_1 00:12:59.627 valid_lft forever preferred_lft forever 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:12:59.627 08:49:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:59.627 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:59.628 192.168.100.9' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:59.628 192.168.100.9' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # head -n 1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:59.628 192.168.100.9' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # tail -n +2 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # head -n 1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 ************************************ 00:12:59.628 START TEST nvmf_filesystem_no_in_capsule 00:12:59.628 ************************************ 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:59.628 08:49:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=372453 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 372453 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 372453 ']' 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 [2024-11-06 08:49:21.725070] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:12:59.628 [2024-11-06 08:49:21.725115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.628 [2024-11-06 08:49:21.799741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.628 [2024-11-06 08:49:21.839411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.628 [2024-11-06 08:49:21.839446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.628 [2024-11-06 08:49:21.839454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.628 [2024-11-06 08:49:21.839459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.628 [2024-11-06 08:49:21.839464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
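
A note on the one genuine script error captured above: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh line 33 prints "[: : integer expression expected" because, as the xtrace line just before it shows, build_nvmf_app_args evaluates '[' '' -eq 1 ']'. The variable under test expands to an empty string, and test's -eq operator requires an integer on both sides; the failed test is simply treated as false and the run continues. The conventional guard is a default expansion. A minimal sketch, with a hypothetical variable name (the real name is already expanded away in the xtrace output):

    # ':-0' supplies a numeric default, so 'test -eq' never sees an empty string.
    if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--illustrative-option)   # placeholder option, not taken from this log
    fi
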
00:12:59.628 [2024-11-06 08:49:21.840974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.628 [2024-11-06 08:49:21.841077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.628 [2024-11-06 08:49:21.841190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.628 [2024-11-06 08:49:21.841192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.628 08:49:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 [2024-11-06 08:49:21.987216] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:59.628 [2024-11-06 08:49:22.007398] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e08da0/0x1e0d290) succeed. 00:12:59.628 [2024-11-06 08:49:22.016436] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e0a430/0x1e4e930) succeed. 
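
The stretch of trace that follows provisions the entire target side. Condensed from the rpc_cmd calls at target/filesystem.sh@52-56 (rpc_cmd is a wrapper around scripts/rpc.py; the transport call is the one just issued above, the rest come next), the sequence amounts to:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # RDMA transport; '-c 0' requests no in-capsule data for this pass, but the
    # target raises it to 256, the minimum required for msdbd=16 (the WARNING above).
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

    # 512 MiB RAM-backed disk with 512-byte blocks (1048576 blocks, matching the
    # bdev_get_bdevs dump further down).
    $rpc_py bdev_malloc_create 512 512 -b Malloc1

    # Subsystem that accepts any host (-a), with the serial the host later greps for.
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The listener address is the one allocate_nic_ips assigned to mlx_0_0 earlier; the host side then attaches with the 'nvme connect -i 15' invocation recorded at target/filesystem.sh@60, reusing the hostnqn/hostid pair generated by 'nvme gen-hostnqn' when common.sh was sourced.
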
00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 Malloc1 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.629 [2024-11-06 08:49:22.270437] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:59.629 08:49:22 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:59.629 { 00:12:59.629 "name": "Malloc1", 00:12:59.629 "aliases": [ 00:12:59.629 "6b53dcd8-6c38-47fc-9481-badc49cc59df" 00:12:59.629 ], 00:12:59.629 "product_name": "Malloc disk", 00:12:59.629 "block_size": 512, 00:12:59.629 "num_blocks": 1048576, 00:12:59.629 "uuid": "6b53dcd8-6c38-47fc-9481-badc49cc59df", 00:12:59.629 "assigned_rate_limits": { 00:12:59.629 "rw_ios_per_sec": 0, 00:12:59.629 "rw_mbytes_per_sec": 0, 00:12:59.629 "r_mbytes_per_sec": 0, 00:12:59.629 "w_mbytes_per_sec": 0 00:12:59.629 }, 00:12:59.629 "claimed": true, 00:12:59.629 "claim_type": "exclusive_write", 00:12:59.629 "zoned": false, 00:12:59.629 "supported_io_types": { 00:12:59.629 "read": true, 00:12:59.629 "write": true, 00:12:59.629 "unmap": true, 00:12:59.629 "flush": true, 00:12:59.629 "reset": true, 00:12:59.629 "nvme_admin": false, 00:12:59.629 "nvme_io": false, 00:12:59.629 "nvme_io_md": false, 00:12:59.629 "write_zeroes": true, 00:12:59.629 "zcopy": true, 00:12:59.629 "get_zone_info": false, 00:12:59.629 "zone_management": false, 00:12:59.629 "zone_append": false, 00:12:59.629 "compare": false, 00:12:59.629 "compare_and_write": false, 00:12:59.629 "abort": true, 00:12:59.629 "seek_hole": false, 00:12:59.629 "seek_data": false, 00:12:59.629 "copy": true, 00:12:59.629 "nvme_iov_md": false 00:12:59.629 }, 00:12:59.629 "memory_domains": [ 00:12:59.629 { 00:12:59.629 "dma_device_id": "system", 00:12:59.629 "dma_device_type": 1 00:12:59.629 }, 00:12:59.629 { 00:12:59.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.629 "dma_device_type": 2 00:12:59.629 } 00:12:59.629 ], 00:12:59.629 "driver_specific": {} 00:12:59.629 } 00:12:59.629 ]' 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:12:59.629 08:49:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:00.564 08:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.564 08:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.564 08:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.564 08:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.564 08:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:02.466 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:02.726 08:49:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.663 ************************************ 00:13:03.663 START TEST filesystem_ext4 00:13:03.663 ************************************ 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:03.663 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:03.663 mke2fs 1.47.0 (5-Feb-2023) 00:13:03.922 Discarding device blocks: 0/522240 done 00:13:03.923 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:03.923 Filesystem UUID: efe40903-3dad-4e44-a93b-516cf047980b 00:13:03.923 Superblock backups stored on 
blocks: 00:13:03.923 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:03.923 00:13:03.923 Allocating group tables: 0/64 done 00:13:03.923 Writing inode tables: 0/64 done 00:13:03.923 Creating journal (8192 blocks): done 00:13:03.923 Writing superblocks and filesystem accounting information: 0/64 done 00:13:03.923 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 372453 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:03.923 00:13:03.923 real 0m0.222s 00:13:03.923 user 0m0.025s 00:13:03.923 sys 0m0.100s 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:03.923 ************************************ 00:13:03.923 END TEST filesystem_ext4 00:13:03.923 ************************************ 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:13:03.923 ************************************ 00:13:03.923 START TEST filesystem_btrfs 00:13:03.923 ************************************ 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:03.923 08:49:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:04.181 btrfs-progs v6.8.1 00:13:04.181 See https://btrfs.readthedocs.io for more information. 00:13:04.181 00:13:04.181 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:04.181 NOTE: several default settings have changed in version 5.15, please make sure 00:13:04.181 this does not affect your deployments: 00:13:04.181 - DUP for metadata (-m dup) 00:13:04.181 - enabled no-holes (-O no-holes) 00:13:04.181 - enabled free-space-tree (-R free-space-tree) 00:13:04.181 00:13:04.181 Label: (null) 00:13:04.181 UUID: 0fc15172-b651-41bb-8806-c4bebf5cbc9a 00:13:04.181 Node size: 16384 00:13:04.181 Sector size: 4096 (CPU page size: 4096) 00:13:04.181 Filesystem size: 510.00MiB 00:13:04.181 Block group profiles: 00:13:04.181 Data: single 8.00MiB 00:13:04.181 Metadata: DUP 32.00MiB 00:13:04.181 System: DUP 8.00MiB 00:13:04.181 SSD detected: yes 00:13:04.181 Zoned device: no 00:13:04.181 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:04.181 Checksum: crc32c 00:13:04.181 Number of devices: 1 00:13:04.181 Devices: 00:13:04.181 ID SIZE PATH 00:13:04.181 1 510.00MiB /dev/nvme0n1p1 00:13:04.181 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 372453 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:04.181 00:13:04.181 real 0m0.262s 00:13:04.181 user 0m0.034s 00:13:04.181 sys 0m0.134s 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.181 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:04.181 ************************************ 00:13:04.181 END TEST filesystem_btrfs 
00:13:04.181 ************************************ 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.438 ************************************ 00:13:04.438 START TEST filesystem_xfs 00:13:04.438 ************************************ 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:04.438 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:04.439 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:04.439 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:04.439 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:04.439 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:04.439 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:04.439 = sectsz=512 attr=2, projid32bit=1 00:13:04.439 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:04.439 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:04.439 data = bsize=4096 blocks=130560, imaxpct=25 00:13:04.439 = sunit=0 swidth=0 blks 00:13:04.439 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:04.439 log =internal log bsize=4096 blocks=16384, version=2 00:13:04.439 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:04.439 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:04.439 Discarding blocks...Done. 
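
ext4, btrfs and xfs all run the same smoke test against the exported namespace; only the mkfs invocation differs (mkfs.ext4 takes -F, the other two take -f, as make_filesystem's branches in the trace show), and the xfs run whose mkfs output just ended continues below with the same steps. Stripped of xtrace noise, the body of nvmf_filesystem_create at target/filesystem.sh@21-43 reduces to roughly this sketch ($fstype and $nvmfpid stand in for the script's own bindings, 372453 here):

    make_filesystem "$fstype" /dev/nvme0n1p1   # mkfs.$fstype with the right force flag
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                      # prove the filesystem is writable
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # the target must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still attached ...
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # ... and the partition still visible
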
00:13:04.439 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:04.439 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 372453 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:05.004 00:13:05.004 real 0m0.673s 00:13:05.004 user 0m0.019s 00:13:05.004 sys 0m0.116s 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:05.004 ************************************ 00:13:05.004 END TEST filesystem_xfs 00:13:05.004 ************************************ 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:05.004 08:49:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.940 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.941 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:05.941 08:49:28 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:05.941 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 372453 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 372453 ']' 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 372453 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.200 08:49:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 372453 00:13:06.200 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.200 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.200 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 372453' 00:13:06.200 killing process with pid 372453 00:13:06.200 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 372453 00:13:06.200 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 372453 00:13:06.459 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:06.459 00:13:06.459 real 0m7.739s 00:13:06.459 user 0m30.170s 00:13:06.459 sys 0m1.161s 00:13:06.459 08:49:29 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.459 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.459 ************************************ 00:13:06.459 END TEST nvmf_filesystem_no_in_capsule 00:13:06.459 ************************************ 00:13:06.459 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:06.459 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:06.459 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.459 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:06.731 ************************************ 00:13:06.731 START TEST nvmf_filesystem_in_capsule 00:13:06.731 ************************************ 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=373936 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 373936 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 373936 ']' 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
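Here nvmfappstart launches the target with a four-core mask (-m 0xF) and all tracepoint groups enabled (-e 0xFFFF), records its pid (373936), and waitforlisten blocks until the RPC socket accepts commands. A rough equivalent, assuming the /var/tmp/spdk.sock path shown in the log; the real helper's readiness check is more thorough than this polling loop:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Wait for the RPC Unix socket to appear; the helper also checks
    # that the process is still alive, which this sketch skips.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done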
00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.732 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.732 [2024-11-06 08:49:29.536321] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:13:06.732 [2024-11-06 08:49:29.536365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.732 [2024-11-06 08:49:29.611113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.732 [2024-11-06 08:49:29.650856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.732 [2024-11-06 08:49:29.650893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.732 [2024-11-06 08:49:29.650901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.732 [2024-11-06 08:49:29.650906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.732 [2024-11-06 08:49:29.650911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.732 [2024-11-06 08:49:29.652398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.732 [2024-11-06 08:49:29.652506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.732 [2024-11-06 08:49:29.652618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.732 [2024-11-06 08:49:29.652619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.999 [2024-11-06 08:49:29.823346] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15cfda0/0x15d4290) 
succeed. 00:13:06.999 [2024-11-06 08:49:29.832353] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15d1430/0x1615930) succeed. 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.999 08:49:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.273 Malloc1 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.273 [2024-11-06 08:49:30.112907] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 
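The only difference from the no_in_capsule run is the transport option -c 4096, which lets writes of up to 4 KiB travel inside the RDMA command capsule instead of being pulled from the host with an RDMA READ. The provisioning sequence just traced, written out as the underlying RPCs (rpc_cmd in these tests resolves to scripts/rpc.py against /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420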
00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:07.273 { 00:13:07.273 "name": "Malloc1", 00:13:07.273 "aliases": [ 00:13:07.273 "38b4d296-55c4-4548-a5fd-09cf5607f39c" 00:13:07.273 ], 00:13:07.273 "product_name": "Malloc disk", 00:13:07.273 "block_size": 512, 00:13:07.273 "num_blocks": 1048576, 00:13:07.273 "uuid": "38b4d296-55c4-4548-a5fd-09cf5607f39c", 00:13:07.273 "assigned_rate_limits": { 00:13:07.273 "rw_ios_per_sec": 0, 00:13:07.273 "rw_mbytes_per_sec": 0, 00:13:07.273 "r_mbytes_per_sec": 0, 00:13:07.273 "w_mbytes_per_sec": 0 00:13:07.273 }, 00:13:07.273 "claimed": true, 00:13:07.273 "claim_type": "exclusive_write", 00:13:07.273 "zoned": false, 00:13:07.273 "supported_io_types": { 00:13:07.273 "read": true, 00:13:07.273 "write": true, 00:13:07.273 "unmap": true, 00:13:07.273 "flush": true, 00:13:07.273 "reset": true, 00:13:07.273 "nvme_admin": false, 00:13:07.273 "nvme_io": false, 00:13:07.273 "nvme_io_md": false, 00:13:07.273 "write_zeroes": true, 00:13:07.273 "zcopy": true, 00:13:07.273 "get_zone_info": false, 00:13:07.273 "zone_management": false, 00:13:07.273 "zone_append": false, 00:13:07.273 "compare": false, 00:13:07.273 "compare_and_write": false, 00:13:07.273 "abort": true, 00:13:07.273 "seek_hole": false, 00:13:07.273 "seek_data": false, 00:13:07.273 "copy": true, 00:13:07.273 "nvme_iov_md": false 00:13:07.273 }, 00:13:07.273 "memory_domains": [ 00:13:07.273 { 00:13:07.273 "dma_device_id": "system", 00:13:07.273 "dma_device_type": 1 00:13:07.273 }, 00:13:07.273 { 00:13:07.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.273 "dma_device_type": 2 00:13:07.273 } 00:13:07.273 ], 00:13:07.273 "driver_specific": {} 00:13:07.273 } 00:13:07.273 ]' 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
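get_bdev_size derives the bdev's capacity from the JSON above: 512 bytes per block times 1048576 blocks is 536870912 bytes, which the helper echoes as 512 (MiB); filesystem.sh evidently scales that back up, since malloc_size ends up as 536870912. The same computation in shell:

    bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
    nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
    echo $(( bs * nb / 1024 / 1024 ))   # 512 MiB; 512 * 1024 * 1024 = 536870912 bytes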
00:13:07.273 08:49:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:08.292 08:49:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.292 08:49:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.292 08:49:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.292 08:49:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:08.293 08:49:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:10.278 08:49:33 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:10.278 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:10.537 08:49:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.508 ************************************ 00:13:11.508 START TEST filesystem_in_capsule_ext4 00:13:11.508 ************************************ 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:11.508 mke2fs 1.47.0 (5-Feb-2023) 00:13:11.508 Discarding device blocks: 0/522240 done 00:13:11.508 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:11.508 Filesystem UUID: d2d76291-5770-4be1-8635-800ea361b46b 00:13:11.508 
Superblock backups stored on blocks: 00:13:11.508 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:11.508 00:13:11.508 Allocating group tables: 0/64 done 00:13:11.508 Writing inode tables: 0/64 done 00:13:11.508 Creating journal (8192 blocks): done 00:13:11.508 Writing superblocks and filesystem accounting information: 0/64 done 00:13:11.508 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:11.508 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 373936 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:11.767 00:13:11.767 real 0m0.184s 00:13:11.767 user 0m0.020s 00:13:11.767 sys 0m0.068s 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 ************************************ 00:13:11.767 END TEST filesystem_in_capsule_ext4 00:13:11.767 ************************************ 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:11.767 08:49:34 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 ************************************ 00:13:11.767 START TEST filesystem_in_capsule_btrfs 00:13:11.767 ************************************ 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:11.767 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:11.768 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:11.768 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:11.768 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:11.768 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:11.768 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:11.768 btrfs-progs v6.8.1 00:13:11.768 See https://btrfs.readthedocs.io for more information. 00:13:11.768 00:13:11.768 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:11.768 NOTE: several default settings have changed in version 5.15, please make sure 00:13:11.768 this does not affect your deployments: 00:13:11.768 - DUP for metadata (-m dup) 00:13:11.768 - enabled no-holes (-O no-holes) 00:13:11.768 - enabled free-space-tree (-R free-space-tree) 00:13:11.768 00:13:11.768 Label: (null) 00:13:11.768 UUID: 87ec3f60-ba89-4339-9acf-5ae377941e28 00:13:11.768 Node size: 16384 00:13:11.768 Sector size: 4096 (CPU page size: 4096) 00:13:11.768 Filesystem size: 510.00MiB 00:13:11.768 Block group profiles: 00:13:11.768 Data: single 8.00MiB 00:13:11.768 Metadata: DUP 32.00MiB 00:13:11.768 System: DUP 8.00MiB 00:13:11.768 SSD detected: yes 00:13:11.768 Zoned device: no 00:13:11.768 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:11.768 Checksum: crc32c 00:13:11.768 Number of devices: 1 00:13:11.768 Devices: 00:13:11.768 ID SIZE PATH 00:13:11.768 1 510.00MiB /dev/nvme0n1p1 00:13:11.768 00:13:11.768 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:11.768 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 373936 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:12.027 00:13:12.027 real 0m0.231s 00:13:12.027 user 0m0.023s 00:13:12.027 sys 0m0.111s 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.027 ************************************ 00:13:12.027 END TEST filesystem_in_capsule_btrfs 00:13:12.027 ************************************ 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.027 ************************************ 00:13:12.027 START TEST filesystem_in_capsule_xfs 00:13:12.027 ************************************ 00:13:12.027 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:12.028 08:49:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:12.028 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:12.028 = sectsz=512 attr=2, projid32bit=1 00:13:12.028 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:12.028 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:12.028 data = bsize=4096 blocks=130560, imaxpct=25 00:13:12.028 = sunit=0 swidth=0 blks 00:13:12.028 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:12.028 log =internal log bsize=4096 blocks=16384, version=2 00:13:12.028 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:12.028 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:12.287 Discarding blocks...Done. 
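As with ext4 and btrfs above, mkfs is followed by the suite's standard smoke test, which the next entries record: mount the fresh filesystem, run a small create-sync-delete cycle, unmount, and use kill -0 to confirm that nvmf_tgt (pid 373936) survived the I/O. In outline:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 373936    # succeeds only while the target process is alive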
00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 373936 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:12.287 00:13:12.287 real 0m0.203s 00:13:12.287 user 0m0.033s 00:13:12.287 sys 0m0.058s 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:12.287 ************************************ 00:13:12.287 END TEST filesystem_in_capsule_xfs 00:13:12.287 ************************************ 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:12.287 08:49:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.224 08:49:36 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 373936 00:13:13.224 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 373936 ']' 00:13:13.225 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 373936 00:13:13.225 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:13.225 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.225 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 373936 00:13:13.483 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.483 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.483 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 373936' 00:13:13.483 killing process with pid 373936 00:13:13.483 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 373936 00:13:13.483 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 373936 00:13:13.742 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:13.743 00:13:13.743 real 0m7.204s 00:13:13.743 
user 0m27.991s 00:13:13.743 sys 0m1.042s 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:13.743 ************************************ 00:13:13.743 END TEST nvmf_filesystem_in_capsule 00:13:13.743 ************************************ 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.743 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:13.743 rmmod nvme_rdma 00:13:13.743 rmmod nvme_fabrics 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:13:14.002 00:13:14.002 real 0m21.482s 00:13:14.002 user 1m0.221s 00:13:14.002 sys 0m6.842s 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:14.002 ************************************ 00:13:14.002 END TEST nvmf_filesystem 00:13:14.002 ************************************ 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.002 ************************************ 00:13:14.002 START TEST nvmf_target_discovery 00:13:14.002 ************************************ 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:14.002 * Looking for test storage... 
00:13:14.002 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.002 08:49:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:14.002 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:14.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.263 --rc genhtml_branch_coverage=1 00:13:14.263 --rc genhtml_function_coverage=1 00:13:14.263 --rc genhtml_legend=1 00:13:14.263 --rc geninfo_all_blocks=1 00:13:14.263 --rc geninfo_unexecuted_blocks=1 00:13:14.263 00:13:14.263 ' 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:14.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.263 --rc genhtml_branch_coverage=1 00:13:14.263 --rc genhtml_function_coverage=1 00:13:14.263 --rc genhtml_legend=1 00:13:14.263 --rc geninfo_all_blocks=1 00:13:14.263 --rc geninfo_unexecuted_blocks=1 00:13:14.263 00:13:14.263 ' 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:14.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.263 --rc genhtml_branch_coverage=1 00:13:14.263 --rc genhtml_function_coverage=1 00:13:14.263 --rc genhtml_legend=1 00:13:14.263 --rc geninfo_all_blocks=1 00:13:14.263 --rc geninfo_unexecuted_blocks=1 00:13:14.263 00:13:14.263 ' 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:14.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.263 --rc genhtml_branch_coverage=1 00:13:14.263 --rc genhtml_function_coverage=1 00:13:14.263 --rc genhtml_legend=1 00:13:14.263 --rc geninfo_all_blocks=1 00:13:14.263 --rc geninfo_unexecuted_blocks=1 00:13:14.263 00:13:14.263 ' 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.263 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.264 08:49:37 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.264 08:49:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.835 08:49:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
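The arrays populated above classify NICs purely by PCI vendor:device ID (Intel 0x8086 for the E810/X722 lists, Mellanox 0x15b3 for the ConnectX family); because this run sets SPDK_TEST_NVMF_NICS=mlx5, only the mlx list survives into pci_devs before the per-device loop starts. A rough, illustrative sketch of the same sysfs walk — not part of the harness, written here only to make the classification concrete:

    # list Mellanox PCI functions and the net devices bound to them
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x15b3 ]] || continue
        echo "$pci -> $(ls "$pci/net" 2>/dev/null)"
    done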
00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:20.835 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:20.835 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:20.836 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:20.836 Found net devices under 0000:da:00.0: mlx_0_0 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.836 08:49:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:20.836 Found net devices under 0000:da:00.1: mlx_0_1 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # rdma_device_init 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
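The modprobe sequence just traced is the full kernel stack NVMe/RDMA needs before rxe_cfg and IP discovery can run. Condensed from the commands above (same modules, same order as load_ib_rdma_modules):

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done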
00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:20.836 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.836 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:13:20.836 altname enp218s0f0np0 00:13:20.836 altname ens818f0np0 00:13:20.836 inet 192.168.100.8/24 scope global mlx_0_0 00:13:20.836 valid_lft forever preferred_lft forever 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.836 08:49:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:20.836 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.836 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:13:20.836 altname enp218s0f1np1 00:13:20.836 altname ens818f1np1 00:13:20.836 inet 192.168.100.9/24 scope global mlx_0_1 00:13:20.836 valid_lft forever preferred_lft forever 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:20.836 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
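Note that both interfaces report state DOWN at the link layer here yet already carry the 192.168.100.0/24 addresses the test expects; get_ip_address only parses the configured address. The pipeline it runs, reproduced verbatim from the trace for mlx_0_0 (yields 192.168.100.8 on this host):

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1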
00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:20.837 192.168.100.9' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:20.837 192.168.100.9' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # head -n 1 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:20.837 192.168.100.9' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # tail -n +2 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # head -n 1 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:20.837 08:49:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=378480 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 378480 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 378480 ']' 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.837 08:49:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.837 [2024-11-06 08:49:42.988932] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:13:20.837 [2024-11-06 08:49:42.988982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.837 [2024-11-06 08:49:43.048288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.837 [2024-11-06 08:49:43.092024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.837 [2024-11-06 08:49:43.092060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.837 [2024-11-06 08:49:43.092067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.837 [2024-11-06 08:49:43.092076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.837 [2024-11-06 08:49:43.092081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
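nvmfappstart boots the target and blocks until its RPC socket answers; every rpc_cmd after this point is driven over that socket. A minimal sketch of the equivalent by hand — the paths relative to the spdk checkout are my assumption, /var/tmp/spdk.sock is the default socket, and rpc_get_methods serves only as a readiness probe standing in for waitforlisten:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4 cores, all tracepoint groups, shm id 0
    # poll until the app listens on its RPC socket (what waitforlisten does, simplified)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done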
00:13:20.837 [2024-11-06 08:49:43.093694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.837 [2024-11-06 08:49:43.093800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.837 [2024-11-06 08:49:43.093906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.837 [2024-11-06 08:49:43.093906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.837 [2024-11-06 08:49:43.253352] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f0cda0/0x1f11290) succeed. 00:13:20.837 [2024-11-06 08:49:43.262352] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f0e430/0x1f52930) succeed. 
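With both mlx5 IB devices registered, the loop that follows provisions everything the later `nvme discover` will report: one null bdev, subsystem, namespace and RDMA listener per cnode1..4, plus the discovery listener and a port-4430 referral. Flattened into plain rpc.py calls for readability (rpc_cmd in the harness wraps scripts/rpc.py; the address, ports and serial numbers are the ones this host uses):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512   # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430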
00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.837 Null1 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.837 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 [2024-11-06 08:49:43.440846] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 Null2 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:20.838 08:49:43 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 Null3 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 Null4 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.838 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:13:20.838 00:13:20.838 Discovery Log Number of Records 6, Generation counter 6 00:13:20.838 =====Discovery Log Entry 0====== 00:13:20.838 trtype: rdma 00:13:20.838 adrfam: ipv4 00:13:20.838 subtype: current discovery subsystem 00:13:20.838 treq: not required 00:13:20.838 portid: 0 00:13:20.838 trsvcid: 4420 00:13:20.838 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:20.838 traddr: 192.168.100.8 00:13:20.838 eflags: explicit discovery connections, duplicate discovery information 00:13:20.838 rdma_prtype: not specified 00:13:20.838 rdma_qptype: connected 00:13:20.838 rdma_cms: rdma-cm 00:13:20.838 rdma_pkey: 0x0000 00:13:20.838 =====Discovery Log Entry 1====== 00:13:20.838 trtype: rdma 00:13:20.838 adrfam: ipv4 00:13:20.838 subtype: nvme subsystem 00:13:20.838 treq: not required 00:13:20.838 portid: 0 00:13:20.838 trsvcid: 4420 00:13:20.838 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:20.838 traddr: 192.168.100.8 00:13:20.838 eflags: none 00:13:20.838 rdma_prtype: not specified 00:13:20.838 rdma_qptype: connected 00:13:20.838 rdma_cms: rdma-cm 00:13:20.838 rdma_pkey: 0x0000 00:13:20.838 =====Discovery Log Entry 2====== 00:13:20.838 trtype: rdma 00:13:20.838 adrfam: ipv4 00:13:20.838 subtype: nvme subsystem 00:13:20.838 treq: not required 00:13:20.838 portid: 0 00:13:20.838 trsvcid: 4420 00:13:20.838 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:20.838 traddr: 192.168.100.8 00:13:20.838 eflags: none 00:13:20.838 rdma_prtype: not specified 00:13:20.838 rdma_qptype: connected 00:13:20.838 rdma_cms: rdma-cm 00:13:20.838 rdma_pkey: 0x0000 00:13:20.838 =====Discovery Log Entry 3====== 00:13:20.838 trtype: rdma 00:13:20.838 adrfam: ipv4 00:13:20.838 subtype: nvme subsystem 00:13:20.838 treq: not required 00:13:20.838 portid: 0 00:13:20.838 trsvcid: 4420 00:13:20.838 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:20.838 traddr: 192.168.100.8 00:13:20.838 eflags: none 00:13:20.839 rdma_prtype: not specified 00:13:20.839 rdma_qptype: connected 00:13:20.839 rdma_cms: rdma-cm 00:13:20.839 rdma_pkey: 0x0000 00:13:20.839 =====Discovery Log Entry 4====== 00:13:20.839 trtype: rdma 00:13:20.839 adrfam: ipv4 00:13:20.839 subtype: nvme subsystem 00:13:20.839 treq: not required 00:13:20.839 portid: 0 00:13:20.839 trsvcid: 4420 00:13:20.839 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:20.839 traddr: 192.168.100.8 00:13:20.839 eflags: none 00:13:20.839 rdma_prtype: not specified 00:13:20.839 rdma_qptype: connected 00:13:20.839 rdma_cms: rdma-cm 00:13:20.839 rdma_pkey: 0x0000 00:13:20.839 =====Discovery Log Entry 5====== 00:13:20.839 trtype: rdma 00:13:20.839 adrfam: ipv4 00:13:20.839 subtype: discovery subsystem referral 00:13:20.839 treq: not required 00:13:20.839 portid: 0 00:13:20.839 trsvcid: 4430 00:13:20.839 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:20.839 traddr: 192.168.100.8 00:13:20.839 eflags: none 00:13:20.839 rdma_prtype: unrecognized 00:13:20.839 rdma_qptype: unrecognized 00:13:20.839 rdma_cms: unrecognized 00:13:20.839 rdma_pkey: 0x0000 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:20.839 Perform nvmf subsystem discovery via RPC 00:13:20.839 08:49:43 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.839 [ 00:13:20.839 { 00:13:20.839 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:20.839 "subtype": "Discovery", 00:13:20.839 "listen_addresses": [ 00:13:20.839 { 00:13:20.839 "trtype": "RDMA", 00:13:20.839 "adrfam": "IPv4", 00:13:20.839 "traddr": "192.168.100.8", 00:13:20.839 "trsvcid": "4420" 00:13:20.839 } 00:13:20.839 ], 00:13:20.839 "allow_any_host": true, 00:13:20.839 "hosts": [] 00:13:20.839 }, 00:13:20.839 { 00:13:20.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.839 "subtype": "NVMe", 00:13:20.839 "listen_addresses": [ 00:13:20.839 { 00:13:20.839 "trtype": "RDMA", 00:13:20.839 "adrfam": "IPv4", 00:13:20.839 "traddr": "192.168.100.8", 00:13:20.839 "trsvcid": "4420" 00:13:20.839 } 00:13:20.839 ], 00:13:20.839 "allow_any_host": true, 00:13:20.839 "hosts": [], 00:13:20.839 "serial_number": "SPDK00000000000001", 00:13:20.839 "model_number": "SPDK bdev Controller", 00:13:20.839 "max_namespaces": 32, 00:13:20.839 "min_cntlid": 1, 00:13:20.839 "max_cntlid": 65519, 00:13:20.839 "namespaces": [ 00:13:20.839 { 00:13:20.839 "nsid": 1, 00:13:20.839 "bdev_name": "Null1", 00:13:20.839 "name": "Null1", 00:13:20.839 "nguid": "A2A9270DA7B646E483A4816AE332F1D4", 00:13:20.839 "uuid": "a2a9270d-a7b6-46e4-83a4-816ae332f1d4" 00:13:20.839 } 00:13:20.839 ] 00:13:20.839 }, 00:13:20.839 { 00:13:20.839 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:20.839 "subtype": "NVMe", 00:13:20.839 "listen_addresses": [ 00:13:20.839 { 00:13:20.839 "trtype": "RDMA", 00:13:20.839 "adrfam": "IPv4", 00:13:20.839 "traddr": "192.168.100.8", 00:13:20.839 "trsvcid": "4420" 00:13:20.839 } 00:13:20.839 ], 00:13:20.839 "allow_any_host": true, 00:13:20.839 "hosts": [], 00:13:20.839 "serial_number": "SPDK00000000000002", 00:13:20.839 "model_number": "SPDK bdev Controller", 00:13:20.839 "max_namespaces": 32, 00:13:20.839 "min_cntlid": 1, 00:13:20.839 "max_cntlid": 65519, 00:13:20.839 "namespaces": [ 00:13:20.839 { 00:13:20.839 "nsid": 1, 00:13:20.839 "bdev_name": "Null2", 00:13:20.839 "name": "Null2", 00:13:20.839 "nguid": "2AC969E5CC194B1EB5D9B8C4D0BFED67", 00:13:20.839 "uuid": "2ac969e5-cc19-4b1e-b5d9-b8c4d0bfed67" 00:13:20.839 } 00:13:20.839 ] 00:13:20.839 }, 00:13:20.839 { 00:13:20.839 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:20.839 "subtype": "NVMe", 00:13:20.839 "listen_addresses": [ 00:13:20.839 { 00:13:20.839 "trtype": "RDMA", 00:13:20.839 "adrfam": "IPv4", 00:13:20.839 "traddr": "192.168.100.8", 00:13:20.839 "trsvcid": "4420" 00:13:20.839 } 00:13:20.839 ], 00:13:20.839 "allow_any_host": true, 00:13:20.839 "hosts": [], 00:13:20.839 "serial_number": "SPDK00000000000003", 00:13:20.839 "model_number": "SPDK bdev Controller", 00:13:20.839 "max_namespaces": 32, 00:13:20.839 "min_cntlid": 1, 00:13:20.839 "max_cntlid": 65519, 00:13:20.839 "namespaces": [ 00:13:20.839 { 00:13:20.839 "nsid": 1, 00:13:20.839 "bdev_name": "Null3", 00:13:20.839 "name": "Null3", 00:13:20.839 "nguid": "8E64C05676EF4DE6BA6969B95AF95CB3", 00:13:20.839 "uuid": "8e64c056-76ef-4de6-ba69-69b95af95cb3" 00:13:20.839 } 00:13:20.839 ] 00:13:20.839 }, 00:13:20.839 { 00:13:20.839 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:20.839 "subtype": "NVMe", 00:13:20.839 "listen_addresses": [ 00:13:20.839 { 00:13:20.839 
"trtype": "RDMA", 00:13:20.839 "adrfam": "IPv4", 00:13:20.839 "traddr": "192.168.100.8", 00:13:20.839 "trsvcid": "4420" 00:13:20.839 } 00:13:20.839 ], 00:13:20.839 "allow_any_host": true, 00:13:20.839 "hosts": [], 00:13:20.839 "serial_number": "SPDK00000000000004", 00:13:20.839 "model_number": "SPDK bdev Controller", 00:13:20.839 "max_namespaces": 32, 00:13:20.839 "min_cntlid": 1, 00:13:20.839 "max_cntlid": 65519, 00:13:20.839 "namespaces": [ 00:13:20.839 { 00:13:20.839 "nsid": 1, 00:13:20.839 "bdev_name": "Null4", 00:13:20.839 "name": "Null4", 00:13:20.839 "nguid": "3B55276E6C4849CFB51997B5AEC8A857", 00:13:20.839 "uuid": "3b55276e-6c48-49cf-b519-97b5aec8a857" 00:13:20.839 } 00:13:20.839 ] 00:13:20.839 } 00:13:20.839 ] 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.839 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:20.840 
08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:20.840 08:49:43 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.840 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:20.840 rmmod nvme_rdma 00:13:20.840 rmmod nvme_fabrics 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 378480 ']' 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 378480 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 378480 ']' 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 378480 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 378480 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 378480' 00:13:21.099 killing process with pid 378480 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 378480 00:13:21.099 08:49:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 378480 00:13:21.358 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:21.358 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:13:21.358 00:13:21.358 real 0m7.317s 00:13:21.358 user 0m6.053s 00:13:21.358 sys 0m4.768s 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:21.359 ************************************ 00:13:21.359 END TEST nvmf_target_discovery 
00:13:21.359 ************************************ 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.359 ************************************ 00:13:21.359 START TEST nvmf_referrals 00:13:21.359 ************************************ 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:21.359 * Looking for test storage... 00:13:21.359 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lcov --version 00:13:21.359 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:21.618 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:21.618 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.618 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.618 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:21.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.619 --rc genhtml_branch_coverage=1 00:13:21.619 --rc genhtml_function_coverage=1 00:13:21.619 --rc genhtml_legend=1 00:13:21.619 --rc geninfo_all_blocks=1 00:13:21.619 --rc geninfo_unexecuted_blocks=1 00:13:21.619 00:13:21.619 ' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:21.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.619 --rc genhtml_branch_coverage=1 00:13:21.619 --rc genhtml_function_coverage=1 00:13:21.619 --rc genhtml_legend=1 00:13:21.619 --rc geninfo_all_blocks=1 00:13:21.619 --rc geninfo_unexecuted_blocks=1 00:13:21.619 00:13:21.619 ' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:21.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.619 --rc genhtml_branch_coverage=1 00:13:21.619 --rc genhtml_function_coverage=1 00:13:21.619 --rc genhtml_legend=1 00:13:21.619 --rc geninfo_all_blocks=1 00:13:21.619 --rc geninfo_unexecuted_blocks=1 00:13:21.619 00:13:21.619 ' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:21.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.619 --rc genhtml_branch_coverage=1 00:13:21.619 --rc genhtml_function_coverage=1 00:13:21.619 --rc genhtml_legend=1 00:13:21.619 --rc geninfo_all_blocks=1 00:13:21.619 --rc geninfo_unexecuted_blocks=1 00:13:21.619 00:13:21.619 ' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.619 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.619 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.620 08:49:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:28.193 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.193 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:28.194 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:28.194 Found net devices under 0000:da:00.0: mlx_0_0 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:28.194 Found net devices under 0000:da:00.1: mlx_0_1 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # 
[[ rdma == tcp ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # rdma_device_init 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.194 08:49:50 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:28.194 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.194 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:13:28.194 altname enp218s0f0np0 00:13:28.194 altname ens818f0np0 00:13:28.194 inet 192.168.100.8/24 scope global mlx_0_0 00:13:28.194 valid_lft forever preferred_lft forever 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:28.194 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.194 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:13:28.194 altname enp218s0f1np1 00:13:28.194 altname ens818f1np1 00:13:28.194 inet 192.168.100.9/24 scope global mlx_0_1 00:13:28.194 valid_lft forever preferred_lft forever 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:28.194 08:49:50 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:28.194 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:28.195 192.168.100.9' 
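
For readers following the trace: allocate_nic_ips walks get_rdma_if_list and pulls each port's IPv4 address with the ip/awk/cut pipeline traced above. A minimal standalone sketch of that step, assuming the mlx_0_0/mlx_0_1 netdev names this job's mlx5 ports expose (the helper name comes from nvmf/common.sh; the pipeline is exactly what the trace shows):

# Sketch of the get_ip_address helper as traced in nvmf/common.sh@116-117.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 in this run
get_ip_address mlx_0_1   # prints 192.168.100.9 in this run
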
00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:28.195 192.168.100.9' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # head -n 1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:28.195 192.168.100.9' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # tail -n +2 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # head -n 1 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=381805 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 381805 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 381805 ']' 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 [2024-11-06 08:49:50.359344] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
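
The nvmfappstart call above amounts to launching the target binary and waiting for its RPC socket. A rough equivalent of what the trace shows, assuming SPDK's scripts/rpc.py is the transport behind rpc_cmd and that rpc_get_methods is an acceptable liveness probe (waitforlisten's real loop also watches /proc/<pid>):

# Approximation of nvmfappstart -m 0xF as traced: start nvmf_tgt in the
# background, remember its pid, then poll until the RPC socket answers.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!    # 381805 in this run
while ! /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
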
00:13:28.195 [2024-11-06 08:49:50.359396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.195 [2024-11-06 08:49:50.419879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.195 [2024-11-06 08:49:50.464907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.195 [2024-11-06 08:49:50.464938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.195 [2024-11-06 08:49:50.464945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.195 [2024-11-06 08:49:50.464951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.195 [2024-11-06 08:49:50.464956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.195 [2024-11-06 08:49:50.466438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.195 [2024-11-06 08:49:50.466542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.195 [2024-11-06 08:49:50.466626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.195 [2024-11-06 08:49:50.466628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 [2024-11-06 08:49:50.625087] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x58dda0/0x592290) succeed. 00:13:28.195 [2024-11-06 08:49:50.633953] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x58f430/0x5d3930) succeed. 
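
With the app up and both IB devices created, the test builds the RDMA transport whose notices appear above and then, as the next trace lines show, attaches a discovery listener on port 8009 and three referrals on port 4430. A condensed sketch of those RPCs, with rpc.py standing in for the rpc_cmd wrapper:

# Transport creation matching the traced rpc_cmd, plus the discovery
# listener and referral setup that referrals.sh performs next.
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
done
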
00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 [2024-11-06 08:49:50.767905] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.195 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.196 08:49:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:28.196 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:28.456 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.716 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:28.976 08:49:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
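At this point the trace is running get_referral_ips (referrals.sh@19-26) one more time: it pulls the referral list as seen by a real host over the discovery service and compares it against what the target reports via RPC. Reduced to plain shell, the check looks roughly like the sketch below; the jq filters and addresses are copied from the trace, while rpc_cmd is the suite's wrapper around SPDK's rpc.py and the xargs join is an assumed detail of how the sorted lists are flattened for comparison.

    # RPC-side view of the referral list (the target's own state)
    rpc_view=$(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs)
    # Host-side view, as returned by a live discovery against port 8009
    host_view=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort | xargs)
    # referrals.sh asserts the two views agree after every add/remove
    [[ "$rpc_view" == "$host_view" ]]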
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:13:29.236 rmmod nvme_rdma
00:13:29.236 rmmod nvme_fabrics
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 381805 ']'
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 381805
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 381805 ']'
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 381805
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 381805
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:29.236 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:29.237 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 381805'
00:13:29.237 killing process with pid 381805
00:13:29.237 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 381805
00:13:29.237 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 381805
00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:13:29.496
00:13:29.496 real 0m8.197s
00:13:29.496 user 0m10.181s
00:13:29.496 sys 0m5.211s
00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:29.496 08:49:52
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.496 ************************************ 00:13:29.496 END TEST nvmf_referrals 00:13:29.496 ************************************ 00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.496 ************************************ 00:13:29.496 START TEST nvmf_connect_disconnect 00:13:29.496 ************************************ 00:13:29.496 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:29.756 * Looking for test storage... 00:13:29.756 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:29.756 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:29.756 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:13:29.756 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.757 --rc genhtml_branch_coverage=1 00:13:29.757 --rc genhtml_function_coverage=1 00:13:29.757 --rc genhtml_legend=1 00:13:29.757 --rc geninfo_all_blocks=1 00:13:29.757 --rc geninfo_unexecuted_blocks=1 00:13:29.757 00:13:29.757 ' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.757 --rc genhtml_branch_coverage=1 00:13:29.757 --rc genhtml_function_coverage=1 00:13:29.757 --rc genhtml_legend=1 00:13:29.757 --rc geninfo_all_blocks=1 00:13:29.757 --rc geninfo_unexecuted_blocks=1 00:13:29.757 00:13:29.757 ' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.757 --rc genhtml_branch_coverage=1 00:13:29.757 --rc genhtml_function_coverage=1 00:13:29.757 --rc genhtml_legend=1 00:13:29.757 --rc geninfo_all_blocks=1 00:13:29.757 --rc geninfo_unexecuted_blocks=1 00:13:29.757 00:13:29.757 ' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:29.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.757 --rc genhtml_branch_coverage=1 00:13:29.757 --rc genhtml_function_coverage=1 00:13:29.757 --rc genhtml_legend=1 00:13:29.757 --rc geninfo_all_blocks=1 00:13:29.757 --rc geninfo_unexecuted_blocks=1 00:13:29.757 00:13:29.757 ' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.757 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.758 08:49:52 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.758 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.758 08:49:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 
00:13:36.331 Found 0000:da:00.0 (0x15b3 - 0x1015)
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)'
00:13:36.331 Found 0000:da:00.1 (0x15b3 - 0x1015)
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:36.331 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]]
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0'
00:13:36.332 Found net devices under 0000:da:00.0: mlx_0_0
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]]
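The stretch of trace above is nvmf/common.sh's device scan: each supported Mellanox PCI function is matched by driver and device ID, RDMA switches the harness to the longer 'nvme connect -i 15' form, and a second loop then maps every accepted PCI function to its kernel network interface. That mapping loop, condensed (variable names and sysfs paths are taken from the trace; the loop body is an approximation of the script, not a verbatim copy):

    for pci in "${pci_devs[@]}"; do
        # netdev sysfs entries registered under this PCI function
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # strip the directory part, leaving interface names such as mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done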
00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:36.332 Found net devices under 0000:da:00.1: mlx_0_1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:36.332 08:49:58 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:36.332 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:36.332 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:13:36.332 altname enp218s0f0np0 00:13:36.332 altname ens818f0np0 00:13:36.332 inet 192.168.100.8/24 scope global mlx_0_0 00:13:36.332 valid_lft forever preferred_lft forever 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:36.332 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:36.332 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:13:36.332 altname enp218s0f1np1 00:13:36.332 altname ens818f1np1 00:13:36.332 inet 192.168.100.9/24 scope global mlx_0_1 00:13:36.332 valid_lft forever preferred_lft forever 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:36.332 08:49:58 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:36.332 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:36.333 192.168.100.9' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:36.333 192.168.100.9' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:36.333 192.168.100.9' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # head -n 1 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:36.333 08:49:58 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=385430
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 385430
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 385430 ']'
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:36.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 [2024-11-06 08:49:58.611933] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:13:36.333 [2024-11-06 08:49:58.611977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:36.333 [2024-11-06 08:49:58.685888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:36.333 [2024-11-06 08:49:58.725577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:36.333 [2024-11-06 08:49:58.725615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:36.333 [2024-11-06 08:49:58.725622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:36.333 [2024-11-06 08:49:58.725628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:36.333 [2024-11-06 08:49:58.725632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
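waitforlisten (common/autotest_common.sh@835 onward, traced above) is the gate between launching nvmf_tgt and issuing RPCs against it: the target is started in the background and the helper polls its UNIX-domain RPC socket until it answers, up to max_retries times. The shape of that handshake, roughly; the polling body below is an illustrative reconstruction rather than the helper's exact code, though rpc.py and rpc_get_methods are standard SPDK tooling:

    # launch the target with the same flags the trace shows
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app is up (max_retries=100 in the trace)
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done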
00:13:36.333 [2024-11-06 08:49:58.727189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:36.333 [2024-11-06 08:49:58.727309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:36.333 [2024-11-06 08:49:58.727346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:36.333 [2024-11-06 08:49:58.727347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.333 08:49:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 [2024-11-06 08:49:58.877559] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:13:36.333 [2024-11-06 08:49:58.897900] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d54da0/0x1d59290) succeed.
00:13:36.333 [2024-11-06 08:49:58.906890] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d56430/0x1d9a930) succeed.
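connect_disconnect.sh@18 has just created the RDMA transport above; @20-24 follow below and finish building the target stack. Stripped of the xtrace noise, the sequence is equivalent to the following plain RPC calls; rpc_cmd forwards to SPDK's rpc.py, and every argument is copied from the trace:

    # RDMA transport; -c 0 requests no in-capsule data, but the target bumps
    # it to 256 bytes (see the nvmf_rdma_create WARNING above)
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    # 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0
    rpc.py bdev_malloc_create 64 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # expose the bdev as a namespace and listen on the first RDMA IP
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420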
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:36.333 [2024-11-06 08:49:59.060482] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:13:36.333 08:49:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:13:40.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:44.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:48.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:52.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:56.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:13:56.426 08:50:18
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:56.426 rmmod nvme_rdma 00:13:56.426 rmmod nvme_fabrics 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 385430 ']' 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 385430 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 385430 ']' 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 385430 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.426 08:50:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 385430 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 385430' 00:13:56.426 killing process with pid 385430 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 385430 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 385430 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:13:56.426 00:13:56.426 real 0m26.758s 00:13:56.426 user 1m23.219s 00:13:56.426 sys 0m5.425s 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.426 
************************************ 00:13:56.426 END TEST nvmf_connect_disconnect 00:13:56.426 ************************************ 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.426 ************************************ 00:13:56.426 START TEST nvmf_multitarget 00:13:56.426 ************************************ 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:13:56.426 * Looking for test storage... 00:13:56.426 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lcov --version 00:13:56.426 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:56.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.687 --rc genhtml_branch_coverage=1 00:13:56.687 --rc genhtml_function_coverage=1 00:13:56.687 --rc genhtml_legend=1 00:13:56.687 --rc geninfo_all_blocks=1 00:13:56.687 --rc geninfo_unexecuted_blocks=1 00:13:56.687 00:13:56.687 ' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:56.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.687 --rc genhtml_branch_coverage=1 00:13:56.687 --rc genhtml_function_coverage=1 00:13:56.687 --rc genhtml_legend=1 00:13:56.687 --rc geninfo_all_blocks=1 00:13:56.687 --rc geninfo_unexecuted_blocks=1 00:13:56.687 00:13:56.687 ' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:56.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.687 --rc genhtml_branch_coverage=1 00:13:56.687 --rc genhtml_function_coverage=1 00:13:56.687 --rc genhtml_legend=1 00:13:56.687 --rc geninfo_all_blocks=1 00:13:56.687 --rc geninfo_unexecuted_blocks=1 00:13:56.687 00:13:56.687 ' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:56.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.687 --rc genhtml_branch_coverage=1 00:13:56.687 --rc genhtml_function_coverage=1 00:13:56.687 --rc genhtml_legend=1 00:13:56.687 --rc geninfo_all_blocks=1 00:13:56.687 --rc geninfo_unexecuted_blocks=1 00:13:56.687 00:13:56.687 ' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.687 08:50:19 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.687 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.688 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:56.688 08:50:19 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.688 08:50:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:03.262 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:03.262 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:03.262 Found net devices under 0000:da:00.0: mlx_0_0 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:03.262 Found net devices under 0000:da:00.1: mlx_0_1 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # rdma_device_init 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:14:03.262 08:50:25 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:14:03.262 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@528 -- # allocate_nic_ips 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:03.263 08:50:25 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:03.263 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:03.263 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:14:03.263 altname enp218s0f0np0 00:14:03.263 altname ens818f0np0 00:14:03.263 inet 192.168.100.8/24 scope global mlx_0_0 00:14:03.263 valid_lft forever preferred_lft forever 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:03.263 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:03.263 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:14:03.263 altname enp218s0f1np1 00:14:03.263 altname ens818f1np1 00:14:03.263 inet 192.168.100.9/24 scope global mlx_0_1 00:14:03.263 valid_lft forever preferred_lft forever 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:03.263 08:50:25 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:14:03.263 192.168.100.9' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:14:03.263 192.168.100.9' 00:14:03.263 08:50:25 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # head -n 1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:14:03.263 192.168.100.9' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # tail -n +2 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # head -n 1 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=392060 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 392060 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.263 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 392060 ']' 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.264 [2024-11-06 08:50:25.480638] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
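The nvmfappstart step above amounts to launching the target and waiting for its RPC socket to answer. A rough standalone equivalent (a sketch: the binary path and flags are copied from the trace, but the poll loop is illustrative rather than the harness's waitforlisten):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll until the app is up and listening on /var/tmp/spdk.sock
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done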
00:14:03.264 [2024-11-06 08:50:25.480682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.264 [2024-11-06 08:50:25.555450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.264 [2024-11-06 08:50:25.597640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.264 [2024-11-06 08:50:25.597675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.264 [2024-11-06 08:50:25.597682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.264 [2024-11-06 08:50:25.597688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.264 [2024-11-06 08:50:25.597693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.264 [2024-11-06 08:50:25.599219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.264 [2024-11-06 08:50:25.599335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.264 [2024-11-06 08:50:25.599442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.264 [2024-11-06 08:50:25.599443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:03.264 "nvmf_tgt_1" 00:14:03.264 08:50:25 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:03.264 "nvmf_tgt_2" 00:14:03.264 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:03.264 
08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:03.264 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:03.264 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:03.523 true 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:03.523 true 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.523 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:03.523 rmmod nvme_rdma 00:14:03.782 rmmod nvme_fabrics 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 392060 ']' 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 392060 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 392060 ']' 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 392060 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392060 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392060' 00:14:03.782 killing process with pid 392060 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 392060 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 392060 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:14:03.782 00:14:03.782 real 0m7.464s 00:14:03.782 user 0m7.237s 00:14:03.782 sys 0m4.898s 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.782 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.782 ************************************ 00:14:03.782 END TEST nvmf_multitarget 00:14:03.782 ************************************ 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.042 ************************************ 00:14:04.042 START TEST nvmf_rpc 00:14:04.042 ************************************ 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:04.042 * Looking for test storage... 
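For reference, the multitarget stage that completed above reduces to five RPC calls. A sketch, with the script path and arguments copied from the trace; the expected count of 3 is the default target plus the two created here:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc nvmf_get_targets | jq length   # 3, matching the '[' 3 '!=' 3 ']' check above
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2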
00:14:04.042 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:14:04.042 08:50:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.042 --rc genhtml_branch_coverage=1 00:14:04.042 --rc genhtml_function_coverage=1 00:14:04.042 --rc genhtml_legend=1 00:14:04.042 --rc geninfo_all_blocks=1 00:14:04.042 --rc geninfo_unexecuted_blocks=1 00:14:04.042 00:14:04.042 ' 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.042 --rc genhtml_branch_coverage=1 00:14:04.042 --rc genhtml_function_coverage=1 00:14:04.042 --rc genhtml_legend=1 00:14:04.042 --rc geninfo_all_blocks=1 00:14:04.042 --rc geninfo_unexecuted_blocks=1 00:14:04.042 00:14:04.042 ' 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.042 --rc genhtml_branch_coverage=1 00:14:04.042 --rc genhtml_function_coverage=1 00:14:04.042 --rc genhtml_legend=1 00:14:04.042 --rc geninfo_all_blocks=1 00:14:04.042 --rc geninfo_unexecuted_blocks=1 00:14:04.042 00:14:04.042 ' 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.042 --rc genhtml_branch_coverage=1 00:14:04.042 --rc genhtml_function_coverage=1 00:14:04.042 --rc genhtml_legend=1 00:14:04.042 --rc geninfo_all_blocks=1 00:14:04.042 --rc geninfo_unexecuted_blocks=1 00:14:04.042 00:14:04.042 ' 00:14:04.042 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:04.043 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.043 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:04.303 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:04.303 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:14:04.303 08:50:27 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.303 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:04.303 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:04.303 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:04.303 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.304 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.304 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.304 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:04.304 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:04.304 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:04.304 08:50:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.885 08:50:32 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:10.885 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:10.885 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:10.885 Found net devices under 0000:da:00.0: mlx_0_0 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.885 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:10.885 Found net devices under 0000:da:00.1: mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # rdma_device_init 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:10.886 08:50:32 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@528 -- # allocate_nic_ips 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:10.886 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:10.886 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:14:10.886 altname enp218s0f0np0 00:14:10.886 altname ens818f0np0 00:14:10.886 inet 192.168.100.8/24 scope global mlx_0_0 00:14:10.886 valid_lft forever preferred_lft forever 00:14:10.886 
08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:10.886 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:10.886 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:14:10.886 altname enp218s0f1np1 00:14:10.886 altname ens818f1np1 00:14:10.886 inet 192.168.100.9/24 scope global mlx_0_1 00:14:10.886 valid_lft forever preferred_lft forever 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
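
[editor's note] The get_ip_address steps traced above reduce to a single pipeline over "ip -o -4": field 4 of its one-line-per-address output is "ADDR/PREFIX", so the prefix length is cut off. A minimal standalone sketch of the same idiom, with the function name and interface taken from the trace:

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; $4 looks like "192.168.100.8/24".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
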
00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:14:10.886 192.168.100.9' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:14:10.886 192.168.100.9' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # head -n 1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:14:10.886 192.168.100.9' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # tail -n +2 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # head -n 1 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:14:10.886 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
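
[editor's note] Selecting the first and second target IPs out of RDMA_IP_LIST, as traced just above, is plain head/tail plumbing over a newline-separated list; a sketch using the two addresses from this run:

# RDMA_IP_LIST holds one address per line, in interface order.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
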
00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=395391 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 395391 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 395391 ']' 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.887 08:50:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.887 [2024-11-06 08:50:32.983021] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:14:10.887 [2024-11-06 08:50:32.983065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.887 [2024-11-06 08:50:33.057573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.887 [2024-11-06 08:50:33.099829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.887 [2024-11-06 08:50:33.099863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.887 [2024-11-06 08:50:33.099870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.887 [2024-11-06 08:50:33.099876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.887 [2024-11-06 08:50:33.099881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
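
[editor's note] nvmfappstart above launches the target and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that start-and-wait pattern; the polling loop is a simplified stand-in for the real waitforlisten in autotest_common.sh, and SPDK_DIR is a placeholder for the workspace path seen in the trace:

# Start nvmf_tgt in the background with the flags from the trace.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app is up and serving RPCs.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.5
done
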
00:14:10.887 [2024-11-06 08:50:33.101439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.887 [2024-11-06 08:50:33.101545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.887 [2024-11-06 08:50:33.101666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.887 [2024-11-06 08:50:33.101667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:10.887 "tick_rate": 2100000000, 00:14:10.887 "poll_groups": [ 00:14:10.887 { 00:14:10.887 "name": "nvmf_tgt_poll_group_000", 00:14:10.887 "admin_qpairs": 0, 00:14:10.887 "io_qpairs": 0, 00:14:10.887 "current_admin_qpairs": 0, 00:14:10.887 "current_io_qpairs": 0, 00:14:10.887 "pending_bdev_io": 0, 00:14:10.887 "completed_nvme_io": 0, 00:14:10.887 "transports": [] 00:14:10.887 }, 00:14:10.887 { 00:14:10.887 "name": "nvmf_tgt_poll_group_001", 00:14:10.887 "admin_qpairs": 0, 00:14:10.887 "io_qpairs": 0, 00:14:10.887 "current_admin_qpairs": 0, 00:14:10.887 "current_io_qpairs": 0, 00:14:10.887 "pending_bdev_io": 0, 00:14:10.887 "completed_nvme_io": 0, 00:14:10.887 "transports": [] 00:14:10.887 }, 00:14:10.887 { 00:14:10.887 "name": "nvmf_tgt_poll_group_002", 00:14:10.887 "admin_qpairs": 0, 00:14:10.887 "io_qpairs": 0, 00:14:10.887 "current_admin_qpairs": 0, 00:14:10.887 "current_io_qpairs": 0, 00:14:10.887 "pending_bdev_io": 0, 00:14:10.887 "completed_nvme_io": 0, 00:14:10.887 "transports": [] 00:14:10.887 }, 00:14:10.887 { 00:14:10.887 "name": "nvmf_tgt_poll_group_003", 00:14:10.887 "admin_qpairs": 0, 00:14:10.887 "io_qpairs": 0, 00:14:10.887 "current_admin_qpairs": 0, 00:14:10.887 "current_io_qpairs": 0, 00:14:10.887 "pending_bdev_io": 0, 00:14:10.887 "completed_nvme_io": 0, 00:14:10.887 "transports": [] 00:14:10.887 } 00:14:10.887 ] 00:14:10.887 }' 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.887 [2024-11-06 08:50:33.372748] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x748e00/0x74d2f0) succeed. 00:14:10.887 [2024-11-06 08:50:33.381790] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x74a490/0x78e990) succeed. 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.887 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:10.887 "tick_rate": 2100000000, 00:14:10.887 "poll_groups": [ 00:14:10.887 { 00:14:10.887 "name": "nvmf_tgt_poll_group_000", 00:14:10.887 "admin_qpairs": 0, 00:14:10.887 "io_qpairs": 0, 00:14:10.887 "current_admin_qpairs": 0, 00:14:10.887 "current_io_qpairs": 0, 00:14:10.887 "pending_bdev_io": 0, 00:14:10.887 "completed_nvme_io": 0, 00:14:10.887 "transports": [ 00:14:10.887 { 00:14:10.887 "trtype": "RDMA", 00:14:10.887 "pending_data_buffer": 0, 00:14:10.887 "devices": [ 00:14:10.887 { 00:14:10.887 "name": "mlx5_0", 00:14:10.887 "polls": 15502, 00:14:10.887 "idle_polls": 15502, 00:14:10.887 "completions": 0, 00:14:10.887 "requests": 0, 00:14:10.887 "request_latency": 0, 00:14:10.887 "pending_free_request": 0, 00:14:10.887 "pending_rdma_read": 0, 00:14:10.887 "pending_rdma_write": 0, 00:14:10.887 "pending_rdma_send": 0, 00:14:10.887 "total_send_wrs": 0, 00:14:10.887 "send_doorbell_updates": 0, 00:14:10.887 "total_recv_wrs": 4096, 00:14:10.887 "recv_doorbell_updates": 1 00:14:10.887 }, 00:14:10.887 { 00:14:10.887 "name": "mlx5_1", 00:14:10.887 "polls": 15502, 00:14:10.887 "idle_polls": 15502, 00:14:10.887 "completions": 0, 00:14:10.887 "requests": 0, 00:14:10.887 "request_latency": 0, 00:14:10.887 "pending_free_request": 0, 00:14:10.887 "pending_rdma_read": 0, 00:14:10.887 "pending_rdma_write": 0, 00:14:10.887 "pending_rdma_send": 0, 00:14:10.887 "total_send_wrs": 0, 00:14:10.887 "send_doorbell_updates": 0, 00:14:10.887 "total_recv_wrs": 4096, 00:14:10.887 "recv_doorbell_updates": 1 00:14:10.887 } 00:14:10.887 ] 00:14:10.887 } 00:14:10.887 ] 00:14:10.887 }, 00:14:10.887 { 00:14:10.887 "name": "nvmf_tgt_poll_group_001", 00:14:10.887 "admin_qpairs": 0, 00:14:10.887 "io_qpairs": 0, 00:14:10.887 "current_admin_qpairs": 0, 00:14:10.887 "current_io_qpairs": 0, 00:14:10.887 "pending_bdev_io": 0, 00:14:10.887 "completed_nvme_io": 0, 00:14:10.887 "transports": [ 00:14:10.887 { 00:14:10.887 "trtype": "RDMA", 00:14:10.887 "pending_data_buffer": 0, 00:14:10.887 "devices": [ 00:14:10.887 { 00:14:10.887 "name": "mlx5_0", 
00:14:10.887 "polls": 10159, 00:14:10.887 "idle_polls": 10159, 00:14:10.887 "completions": 0, 00:14:10.887 "requests": 0, 00:14:10.887 "request_latency": 0, 00:14:10.887 "pending_free_request": 0, 00:14:10.887 "pending_rdma_read": 0, 00:14:10.887 "pending_rdma_write": 0, 00:14:10.887 "pending_rdma_send": 0, 00:14:10.887 "total_send_wrs": 0, 00:14:10.887 "send_doorbell_updates": 0, 00:14:10.887 "total_recv_wrs": 4096, 00:14:10.887 "recv_doorbell_updates": 1 00:14:10.887 }, 00:14:10.887 { 00:14:10.887 "name": "mlx5_1", 00:14:10.887 "polls": 10159, 00:14:10.887 "idle_polls": 10159, 00:14:10.887 "completions": 0, 00:14:10.887 "requests": 0, 00:14:10.887 "request_latency": 0, 00:14:10.888 "pending_free_request": 0, 00:14:10.888 "pending_rdma_read": 0, 00:14:10.888 "pending_rdma_write": 0, 00:14:10.888 "pending_rdma_send": 0, 00:14:10.888 "total_send_wrs": 0, 00:14:10.888 "send_doorbell_updates": 0, 00:14:10.888 "total_recv_wrs": 4096, 00:14:10.888 "recv_doorbell_updates": 1 00:14:10.888 } 00:14:10.888 ] 00:14:10.888 } 00:14:10.888 ] 00:14:10.888 }, 00:14:10.888 { 00:14:10.888 "name": "nvmf_tgt_poll_group_002", 00:14:10.888 "admin_qpairs": 0, 00:14:10.888 "io_qpairs": 0, 00:14:10.888 "current_admin_qpairs": 0, 00:14:10.888 "current_io_qpairs": 0, 00:14:10.888 "pending_bdev_io": 0, 00:14:10.888 "completed_nvme_io": 0, 00:14:10.888 "transports": [ 00:14:10.888 { 00:14:10.888 "trtype": "RDMA", 00:14:10.888 "pending_data_buffer": 0, 00:14:10.888 "devices": [ 00:14:10.888 { 00:14:10.888 "name": "mlx5_0", 00:14:10.888 "polls": 5383, 00:14:10.888 "idle_polls": 5383, 00:14:10.888 "completions": 0, 00:14:10.888 "requests": 0, 00:14:10.888 "request_latency": 0, 00:14:10.888 "pending_free_request": 0, 00:14:10.888 "pending_rdma_read": 0, 00:14:10.888 "pending_rdma_write": 0, 00:14:10.888 "pending_rdma_send": 0, 00:14:10.888 "total_send_wrs": 0, 00:14:10.888 "send_doorbell_updates": 0, 00:14:10.888 "total_recv_wrs": 4096, 00:14:10.888 "recv_doorbell_updates": 1 00:14:10.888 }, 00:14:10.888 { 00:14:10.888 "name": "mlx5_1", 00:14:10.888 "polls": 5383, 00:14:10.888 "idle_polls": 5383, 00:14:10.888 "completions": 0, 00:14:10.888 "requests": 0, 00:14:10.888 "request_latency": 0, 00:14:10.888 "pending_free_request": 0, 00:14:10.888 "pending_rdma_read": 0, 00:14:10.888 "pending_rdma_write": 0, 00:14:10.888 "pending_rdma_send": 0, 00:14:10.888 "total_send_wrs": 0, 00:14:10.888 "send_doorbell_updates": 0, 00:14:10.888 "total_recv_wrs": 4096, 00:14:10.888 "recv_doorbell_updates": 1 00:14:10.888 } 00:14:10.888 ] 00:14:10.888 } 00:14:10.888 ] 00:14:10.888 }, 00:14:10.888 { 00:14:10.888 "name": "nvmf_tgt_poll_group_003", 00:14:10.888 "admin_qpairs": 0, 00:14:10.888 "io_qpairs": 0, 00:14:10.888 "current_admin_qpairs": 0, 00:14:10.888 "current_io_qpairs": 0, 00:14:10.888 "pending_bdev_io": 0, 00:14:10.888 "completed_nvme_io": 0, 00:14:10.888 "transports": [ 00:14:10.888 { 00:14:10.888 "trtype": "RDMA", 00:14:10.888 "pending_data_buffer": 0, 00:14:10.888 "devices": [ 00:14:10.888 { 00:14:10.888 "name": "mlx5_0", 00:14:10.888 "polls": 910, 00:14:10.888 "idle_polls": 910, 00:14:10.888 "completions": 0, 00:14:10.888 "requests": 0, 00:14:10.888 "request_latency": 0, 00:14:10.888 "pending_free_request": 0, 00:14:10.888 "pending_rdma_read": 0, 00:14:10.888 "pending_rdma_write": 0, 00:14:10.888 "pending_rdma_send": 0, 00:14:10.888 "total_send_wrs": 0, 00:14:10.888 "send_doorbell_updates": 0, 00:14:10.888 "total_recv_wrs": 4096, 00:14:10.888 "recv_doorbell_updates": 1 00:14:10.888 }, 00:14:10.888 { 00:14:10.888 "name": "mlx5_1", 
00:14:10.888 "polls": 910, 00:14:10.888 "idle_polls": 910, 00:14:10.888 "completions": 0, 00:14:10.888 "requests": 0, 00:14:10.888 "request_latency": 0, 00:14:10.888 "pending_free_request": 0, 00:14:10.888 "pending_rdma_read": 0, 00:14:10.888 "pending_rdma_write": 0, 00:14:10.888 "pending_rdma_send": 0, 00:14:10.888 "total_send_wrs": 0, 00:14:10.888 "send_doorbell_updates": 0, 00:14:10.888 "total_recv_wrs": 4096, 00:14:10.888 "recv_doorbell_updates": 1 00:14:10.888 } 00:14:10.888 ] 00:14:10.888 } 00:14:10.888 ] 00:14:10.888 } 00:14:10.888 ] 00:14:10.888 }' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:10.888 08:50:33 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.888 Malloc1 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.888 [2024-11-06 08:50:33.830197] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:10.888 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:10.889 08:50:33 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:10.889 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:10.889 [2024-11-06 08:50:33.876368] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:14:11.148 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:11.148 could not add new controller: failed to write to nvme-fabrics device 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.148 08:50:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:12.085 08:50:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.085 08:50:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:12.085 08:50:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.085 08:50:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:12.085 08:50:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:13.992 08:50:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:13.992 08:50:36 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:13.992 08:50:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.992 08:50:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:13.992 08:50:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.992 08:50:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:13.992 08:50:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.929 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.929 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:14.929 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:14.929 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.929 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:14.930 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:14.930 [2024-11-06 08:50:37.938283] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:14:15.189 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:15.189 could not add new controller: failed to write to nvme-fabrics device 00:14:15.189 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:15.189 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.189 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.189 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.189 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:15.189 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.189 08:50:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.189 08:50:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.189 08:50:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:16.126 08:50:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.126 08:50:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.126 08:50:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.126 08:50:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:16.126 08:50:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.032 08:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.033 08:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.033 08:50:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.033 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:18.033 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.033 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:18.033 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:18.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.970 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:19.229 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.229 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:14:19.229 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:19.229 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:19.229 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.229 08:50:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:19.229 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.229 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:19.229 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.229 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:19.229 [2024-11-06 08:50:42.011892] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:19.229 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.230 08:50:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:20.167 08:50:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:20.167 08:50:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:20.167 08:50:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:20.167 08:50:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:20.167 08:50:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:22.074 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:22.074 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:22.074 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:22.074 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:22.074 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:22.074 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:22.074 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:23.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:23.011 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:23.011 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:23.011 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:23.011 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:23.011 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:23.011 08:50:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.011 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:23.271 [2024-11-06 08:50:46.036587] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.271 08:50:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:24.208 08:50:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:24.208 08:50:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:24.208 08:50:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:24.208 08:50:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:24.208 08:50:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:26.113 08:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:26.113 08:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:26.113 08:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:26.113 08:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:26.113 08:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:26.113 08:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:26.113 08:50:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:27.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.049 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:27.308 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.308 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:27.308 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.308 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:27.308 [2024-11-06 08:50:50.070352] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:27.308 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.308 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.309 08:50:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:28.246 08:50:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:28.246 08:50:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:28.246 08:50:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:28.246 08:50:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:28.246 08:50:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:30.151 08:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:30.151 08:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:30.151 08:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:30.151 08:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:30.151 08:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:30.151 08:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:30.151 08:50:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:31.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:31.088 [2024-11-06 08:50:54.090973] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:31.088 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:31.347 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:31.347 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:31.347 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:31.347 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:31.347 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:31.347 08:50:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:32.285 08:50:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:32.285 08:50:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:32.285 08:50:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:32.285 08:50:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:32.285 08:50:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:34.190 08:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:34.190 08:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:34.190 08:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:34.190 08:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:34.190 08:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:34.190 08:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:34.190 08:50:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:35.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:35.128 [2024-11-06 08:50:58.119177] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.128 08:50:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:36.506 08:50:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:36.506 08:50:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:36.506 08:50:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:36.506 08:50:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:36.506 08:50:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:38.410 08:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:38.410 08:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:38.410 08:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:38.410 08:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:38.410 08:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:38.410 08:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:38.410 08:51:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:39.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 [2024-11-06 08:51:02.147916] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.352 [2024-11-06 08:51:02.196278] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.352 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 [2024-11-06 08:51:02.244465] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
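The xtrace above is target/rpc.sh cycling one subsystem lifecycle per loop iteration: create the subsystem, attach an RDMA listener and a Malloc namespace, open it to any host, connect with nvme-cli, poll until the serial shows up in lsblk, then disconnect and tear everything down. A minimal standalone sketch of that lifecycle, assuming an SPDK target is already running, that scripts/rpc.py is reachable as rpc.py, and reusing the NQN, serial, and address this run happens to use:

    #!/usr/bin/env bash
    # Sketch only: NQN, serial, address, and loop count are taken from this log.
    nqn=nqn.2016-06.io.spdk:cnode1
    serial=SPDKISFASTANDAWESOME
    addr=192.168.100.8

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem "$nqn" -s "$serial"
        rpc.py nvmf_subsystem_add_listener "$nqn" -t rdma -a "$addr" -s 4420
        rpc.py nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host "$nqn"
        nvme connect -i 15 -t rdma -n "$nqn" -a "$addr" -s 4420
        # waitforserial: poll until the namespace appears as a block device
        until lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 2; done
        nvme disconnect -n "$nqn"
        rpc.py nvmf_subsystem_remove_ns "$nqn" 5
        rpc.py nvmf_delete_subsystem "$nqn"
    done

The real helpers in common/autotest_common.sh additionally bound the polling at 16 attempts ((( i++ <= 15 ))) and verify via waitforserial_disconnect that the serial has disappeared again before the namespace and subsystem are deleted.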
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 [2024-11-06 08:51:02.340833] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.353 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.613 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:14:39.613 "tick_rate": 2100000000,
00:14:39.613 "poll_groups": [
00:14:39.613 {
00:14:39.613 "name": "nvmf_tgt_poll_group_000",
00:14:39.613 "admin_qpairs": 2,
00:14:39.613 "io_qpairs": 27,
00:14:39.613 "current_admin_qpairs": 0,
00:14:39.613 "current_io_qpairs": 0,
00:14:39.613 "pending_bdev_io": 0,
00:14:39.613 "completed_nvme_io": 103,
00:14:39.613 "transports": [
00:14:39.613 {
00:14:39.613 "trtype": "RDMA",
00:14:39.613 "pending_data_buffer": 0,
00:14:39.613 "devices": [
00:14:39.613 {
00:14:39.613 "name": "mlx5_0",
00:14:39.613 "polls": 3595588,
00:14:39.613 "idle_polls": 3595309,
00:14:39.613 "completions": 319,
00:14:39.613 "requests": 159,
00:14:39.613 "request_latency": 29091276,
00:14:39.613 "pending_free_request": 0,
00:14:39.613 "pending_rdma_read": 0,
00:14:39.613 "pending_rdma_write": 0,
00:14:39.613 "pending_rdma_send": 0,
00:14:39.613 "total_send_wrs": 262,
00:14:39.613 "send_doorbell_updates": 135,
00:14:39.613 "total_recv_wrs": 4255,
00:14:39.613 "recv_doorbell_updates": 135
00:14:39.613 },
00:14:39.613 {
00:14:39.613 "name": "mlx5_1",
00:14:39.613 "polls": 3595588,
00:14:39.614 "idle_polls": 3595588,
00:14:39.614 "completions": 0,
00:14:39.614 "requests": 0,
00:14:39.614 "request_latency": 0,
00:14:39.614 "pending_free_request": 0,
00:14:39.614 "pending_rdma_read": 0,
00:14:39.614 "pending_rdma_write": 0,
00:14:39.614 "pending_rdma_send": 0,
00:14:39.614 "total_send_wrs": 0,
00:14:39.614 "send_doorbell_updates": 0,
00:14:39.614 "total_recv_wrs": 4096,
00:14:39.614 "recv_doorbell_updates": 1
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 },
00:14:39.614 {
00:14:39.614 "name": "nvmf_tgt_poll_group_001",
00:14:39.614 "admin_qpairs": 2,
00:14:39.614 "io_qpairs": 26,
00:14:39.614 "current_admin_qpairs": 0,
00:14:39.614 "current_io_qpairs": 0,
00:14:39.614 "pending_bdev_io": 0,
00:14:39.614 "completed_nvme_io": 148,
00:14:39.614 "transports": [
00:14:39.614 {
00:14:39.614 "trtype": "RDMA",
00:14:39.614 "pending_data_buffer": 0,
00:14:39.614 "devices": [
00:14:39.614 {
00:14:39.614 "name": "mlx5_0",
00:14:39.614 "polls": 3620079,
00:14:39.614 "idle_polls": 3619717,
00:14:39.614 "completions": 402,
00:14:39.614 "requests": 201,
00:14:39.614 "request_latency": 32815034,
00:14:39.614 "pending_free_request": 0,
00:14:39.614 "pending_rdma_read": 0,
00:14:39.614 "pending_rdma_write": 0,
00:14:39.614 "pending_rdma_send": 0,
00:14:39.614 "total_send_wrs": 348,
00:14:39.614 "send_doorbell_updates": 176,
00:14:39.614 "total_recv_wrs": 4297,
00:14:39.614 "recv_doorbell_updates": 177
00:14:39.614 },
00:14:39.614 {
00:14:39.614 "name": "mlx5_1",
00:14:39.614 "polls": 3620079,
00:14:39.614 "idle_polls": 3620079,
00:14:39.614 "completions": 0,
00:14:39.614 "requests": 0,
00:14:39.614 "request_latency": 0,
00:14:39.614 "pending_free_request": 0,
00:14:39.614 "pending_rdma_read": 0,
00:14:39.614 "pending_rdma_write": 0,
00:14:39.614 "pending_rdma_send": 0,
00:14:39.614 "total_send_wrs": 0,
00:14:39.614 "send_doorbell_updates": 0,
00:14:39.614 "total_recv_wrs": 4096,
00:14:39.614 "recv_doorbell_updates": 1
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 },
00:14:39.614 {
00:14:39.614 "name": "nvmf_tgt_poll_group_002",
00:14:39.614 "admin_qpairs": 1,
00:14:39.614 "io_qpairs": 26,
00:14:39.614 "current_admin_qpairs": 0,
00:14:39.614 "current_io_qpairs": 0,
00:14:39.614 "pending_bdev_io": 0,
00:14:39.614 "completed_nvme_io": 127,
00:14:39.614 "transports": [
00:14:39.614 {
00:14:39.614 "trtype": "RDMA",
00:14:39.614 "pending_data_buffer": 0,
00:14:39.614 "devices": [
00:14:39.614 {
00:14:39.614 "name": "mlx5_0",
00:14:39.614 "polls": 3569687,
00:14:39.614 "idle_polls": 3569417,
00:14:39.614 "completions": 309,
00:14:39.614 "requests": 154,
00:14:39.614 "request_latency": 29225136,
00:14:39.614 "pending_free_request": 0,
00:14:39.614 "pending_rdma_read": 0,
00:14:39.614 "pending_rdma_write": 0,
00:14:39.614 "pending_rdma_send": 0,
00:14:39.614 "total_send_wrs": 268,
00:14:39.614 "send_doorbell_updates": 131,
00:14:39.614 "total_recv_wrs": 4250,
00:14:39.614 "recv_doorbell_updates": 131
00:14:39.614 },
00:14:39.614 {
00:14:39.614 "name": "mlx5_1",
00:14:39.614 "polls": 3569687,
00:14:39.614 "idle_polls": 3569687,
00:14:39.614 "completions": 0,
00:14:39.614 "requests": 0,
00:14:39.614 "request_latency": 0,
00:14:39.614 "pending_free_request": 0,
00:14:39.614 "pending_rdma_read": 0,
00:14:39.614 "pending_rdma_write": 0,
00:14:39.614 "pending_rdma_send": 0,
00:14:39.614 "total_send_wrs": 0,
00:14:39.614 "send_doorbell_updates": 0,
00:14:39.614 "total_recv_wrs": 4096,
00:14:39.614 "recv_doorbell_updates": 1
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 },
00:14:39.614 {
00:14:39.614 "name": "nvmf_tgt_poll_group_003",
00:14:39.614 "admin_qpairs": 2,
00:14:39.614 "io_qpairs": 26,
00:14:39.614 "current_admin_qpairs": 0,
00:14:39.614 "current_io_qpairs": 0,
00:14:39.614 "pending_bdev_io": 0,
00:14:39.614 "completed_nvme_io": 77,
00:14:39.614 "transports": [
00:14:39.614 {
00:14:39.614 "trtype": "RDMA",
00:14:39.614 "pending_data_buffer": 0,
00:14:39.614 "devices": [
00:14:39.614 {
00:14:39.614 "name": "mlx5_0",
00:14:39.614 "polls": 2792296,
00:14:39.614 "idle_polls": 2792055,
00:14:39.614 "completions": 262,
00:14:39.614 "requests": 131,
00:14:39.614 "request_latency": 19780356,
00:14:39.614 "pending_free_request": 0,
00:14:39.614 "pending_rdma_read": 0,
00:14:39.614 "pending_rdma_write": 0,
00:14:39.614 "pending_rdma_send": 0,
00:14:39.614 "total_send_wrs": 208,
00:14:39.614 "send_doorbell_updates": 119,
00:14:39.614 "total_recv_wrs": 4227,
00:14:39.614 "recv_doorbell_updates": 120
00:14:39.614 },
00:14:39.614 {
00:14:39.614 "name": "mlx5_1",
00:14:39.614 "polls": 2792296,
00:14:39.614 "idle_polls": 2792296,
00:14:39.614 "completions": 0,
00:14:39.614 "requests": 0,
00:14:39.614 "request_latency": 0,
00:14:39.614 "pending_free_request": 0,
00:14:39.614 "pending_rdma_read": 0,
00:14:39.614 "pending_rdma_write": 0,
00:14:39.614 "pending_rdma_send": 0,
00:14:39.614 "total_send_wrs": 0,
00:14:39.614 "send_doorbell_updates": 0,
00:14:39.614 "total_recv_wrs": 4096,
00:14:39.614 "recv_doorbell_updates": 1
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 }
00:14:39.614 ]
00:14:39.614 }'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 ))
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1292 > 0 ))
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 110911802 > 0 ))
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:39.614 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
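The target/rpc.sh@19-20 entries above show how jsum reduces the nvmf_get_stats JSON to a single number for the pass/fail checks: jq pulls one counter out of every poll group (and, for RDMA, out of every device), and awk sums the column. A sketch of the same helper, assuming rpc_cmd forwards to the running target's scripts/rpc.py:

    # Sum one numeric field across all poll groups / devices (sketch).
    jsum() {
        local filter=$1
        rpc_cmd nvmf_get_stats | jq "$filter" | awk '{s+=$1}END{print s}'
    }

    jsum '.poll_groups[].io_qpairs'                              # 105 in this run
    jsum '.poll_groups[].transports[].devices[].request_latency' # 110911802 in this run

The test only asserts that each sum is greater than zero, i.e. that the stats actually accumulated across the connect/disconnect loops above.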
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 395391 ']'
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 395391
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 395391 ']'
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 395391
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 395391
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 395391'
killing process with pid 395391
08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 395391
00:14:39.874 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 395391
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:14:40.134
00:14:40.134 real 0m36.102s
00:14:40.134 user 2m0.900s
00:14:40.134 sys 0m5.939s
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:40.134 ************************************
00:14:40.134 END TEST nvmf_rpc
00:14:40.134 ************************************
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:40.134 08:51:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:40.134 ************************************
00:14:40.134 START TEST nvmf_invalid
00:14:40.134 ************************************
00:14:40.134 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma
00:14:40.134 * Looking for test storage...
00:14:40.134 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:40.134 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:40.134 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lcov --version 00:14:40.134 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.394 --rc genhtml_branch_coverage=1 00:14:40.394 --rc genhtml_function_coverage=1 00:14:40.394 --rc genhtml_legend=1 00:14:40.394 --rc geninfo_all_blocks=1 00:14:40.394 --rc geninfo_unexecuted_blocks=1 00:14:40.394 00:14:40.394 ' 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.394 --rc genhtml_branch_coverage=1 00:14:40.394 --rc genhtml_function_coverage=1 00:14:40.394 --rc genhtml_legend=1 00:14:40.394 --rc geninfo_all_blocks=1 00:14:40.394 --rc geninfo_unexecuted_blocks=1 00:14:40.394 00:14:40.394 ' 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.394 --rc genhtml_branch_coverage=1 00:14:40.394 --rc genhtml_function_coverage=1 00:14:40.394 --rc genhtml_legend=1 00:14:40.394 --rc geninfo_all_blocks=1 00:14:40.394 --rc geninfo_unexecuted_blocks=1 00:14:40.394 00:14:40.394 ' 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.394 --rc genhtml_branch_coverage=1 00:14:40.394 --rc genhtml_function_coverage=1 00:14:40.394 --rc genhtml_legend=1 00:14:40.394 --rc geninfo_all_blocks=1 00:14:40.394 --rc geninfo_unexecuted_blocks=1 00:14:40.394 00:14:40.394 ' 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:40.394 
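# --- editor's note (not part of the captured log) -------------------------
# The cmp_versions trace above answers "is lcov 1.15 older than 2?" by
# splitting both versions on . - : and comparing the fields numerically,
# padding the shorter one with zeros. A condensed sketch of the same idea
# (illustrative, not the exact scripts/common.sh code; assumes purely
# numeric fields, which the real helper validates with [[ =~ ^[0-9]+$ ]]):
version_lt() {                                  # version_lt 1.15 2 -> true
    local -a a b
    local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                    # equal is not "less than"
}
# ---------------------------------------------------------------------------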
08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.394 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.395 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
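# --- editor's note (not part of the captured log) -------------------------
# The "[: : integer expression expected" stderr above is benign: common.sh
# line 33 feeds an unset/empty variable to `[ ... -eq 1 ]`, the test simply
# returns non-zero, and the branch falls through. A defensive form that
# keeps the test numeric and silences the noise (variable name hypothetical;
# the actual one at common.sh:33 is not visible in this trace):
if [ "${some_flag:-0}" -eq 1 ]; then            # ${var:-0} supplies a default
    echo "flag enabled"
fi
# ---------------------------------------------------------------------------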
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:40.395 08:51:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:46.966 08:51:08 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:46.966 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:46.966 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:46.966 Found net devices under 0000:da:00.0: mlx_0_0 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:46.966 Found net devices under 0000:da:00.1: mlx_0_1 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # rdma_device_init 00:14:46.966 08:51:08 
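# --- editor's note (not part of the captured log) -------------------------
# The discovery loop above maps each Mellanox PCI function to its kernel
# netdev names by globbing sysfs and stripping the directory prefix, which
# is how "Found net devices under 0000:da:00.0: mlx_0_0" is produced.
# Stand-alone sketch using an address from this run:
pci=0000:da:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")           # keep basenames only
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# ---------------------------------------------------------------------------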
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@528 -- # allocate_nic_ips 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:46.966 08:51:08 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:46.966 08:51:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:46.966 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:46.966 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:14:46.966 altname enp218s0f0np0 00:14:46.966 altname ens818f0np0 00:14:46.966 inet 192.168.100.8/24 scope global mlx_0_0 00:14:46.966 valid_lft forever preferred_lft forever 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:46.966 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:46.966 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:14:46.966 altname enp218s0f1np1 00:14:46.966 altname ens818f1np1 00:14:46.966 inet 192.168.100.9/24 scope global mlx_0_1 00:14:46.966 valid_lft forever preferred_lft forever 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:46.966 08:51:09 
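# --- editor's note (not part of the captured log) -------------------------
# get_ip_address, traced twice above, is a three-stage pipeline: `ip -o -4`
# prints one line per address, awk grabs the CIDR field, cut drops the
# prefix length. The same pipeline as a stand-alone helper:
get_ip_address_sketch() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address_sketch mlx_0_0    # -> 192.168.100.8 on this node
get_ip_address_sketch mlx_0_1    # -> 192.168.100.9 on this node
# ---------------------------------------------------------------------------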
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:14:46.966 192.168.100.9' 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:14:46.966 192.168.100.9' 00:14:46.966 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # head -n 1 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:46.967 08:51:09 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:14:46.967 192.168.100.9' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # tail -n +2 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # head -n 1 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=404096 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 404096 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 404096 ']' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:46.967 [2024-11-06 08:51:09.170157] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:14:46.967 [2024-11-06 08:51:09.170212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.967 [2024-11-06 08:51:09.245110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.967 [2024-11-06 08:51:09.288505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.967 [2024-11-06 08:51:09.288540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
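# --- editor's note (not part of the captured log) -------------------------
# The first/second target IPs above are peeled off the newline-separated
# RDMA_IP_LIST with head/tail, exactly as traced. Reproduced with the
# values from this run:
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # first line
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # second line
# ---------------------------------------------------------------------------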
00:14:46.967 [2024-11-06 08:51:09.288547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.967 [2024-11-06 08:51:09.288553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.967 [2024-11-06 08:51:09.288558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.967 [2024-11-06 08:51:09.289970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.967 [2024-11-06 08:51:09.290077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.967 [2024-11-06 08:51:09.290183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.967 [2024-11-06 08:51:09.290184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6945 00:14:46.967 [2024-11-06 08:51:09.600718] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:46.967 { 00:14:46.967 "nqn": "nqn.2016-06.io.spdk:cnode6945", 00:14:46.967 "tgt_name": "foobar", 00:14:46.967 "method": "nvmf_create_subsystem", 00:14:46.967 "req_id": 1 00:14:46.967 } 00:14:46.967 Got JSON-RPC error response 00:14:46.967 response: 00:14:46.967 { 00:14:46.967 "code": -32603, 00:14:46.967 "message": "Unable to find target foobar" 00:14:46.967 }' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:46.967 { 00:14:46.967 "nqn": "nqn.2016-06.io.spdk:cnode6945", 00:14:46.967 "tgt_name": "foobar", 00:14:46.967 "method": "nvmf_create_subsystem", 00:14:46.967 "req_id": 1 00:14:46.967 } 00:14:46.967 Got JSON-RPC error response 00:14:46.967 response: 00:14:46.967 { 00:14:46.967 "code": -32603, 00:14:46.967 "message": "Unable to find target foobar" 00:14:46.967 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6640 00:14:46.967 [2024-11-06 08:51:09.813442] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6640: 
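# --- editor's note (not part of the captured log) -------------------------
# The negative test above drives rpc.py with a bogus target name and asserts
# on the JSON-RPC error text; the escaped glob in the trace
# (*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t*) is just how xtrace renders
# a quoted [[ ... == *"Unable to find target"* ]] pattern. Equivalent sketch
# (run from an SPDK checkout; `|| true` keeps set -e from aborting on the
# expected failure):
out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6945 2>&1) || true
[[ $out == *"Unable to find target"* ]] && echo "got the expected -32603 error"
# ---------------------------------------------------------------------------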
invalid serial number 'SPDKISFASTANDAWESOME' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:46.967 { 00:14:46.967 "nqn": "nqn.2016-06.io.spdk:cnode6640", 00:14:46.967 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:46.967 "method": "nvmf_create_subsystem", 00:14:46.967 "req_id": 1 00:14:46.967 } 00:14:46.967 Got JSON-RPC error response 00:14:46.967 response: 00:14:46.967 { 00:14:46.967 "code": -32602, 00:14:46.967 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:46.967 }' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:46.967 { 00:14:46.967 "nqn": "nqn.2016-06.io.spdk:cnode6640", 00:14:46.967 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:46.967 "method": "nvmf_create_subsystem", 00:14:46.967 "req_id": 1 00:14:46.967 } 00:14:46.967 Got JSON-RPC error response 00:14:46.967 response: 00:14:46.967 { 00:14:46.967 "code": -32602, 00:14:46.967 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:46.967 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:46.967 08:51:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16692 00:14:47.227 [2024-11-06 08:51:10.042234] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16692: invalid model number 'SPDK_Controller' 00:14:47.227 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:47.227 { 00:14:47.227 "nqn": "nqn.2016-06.io.spdk:cnode16692", 00:14:47.227 "model_number": "SPDK_Controller\u001f", 00:14:47.227 "method": "nvmf_create_subsystem", 00:14:47.227 "req_id": 1 00:14:47.227 } 00:14:47.227 Got JSON-RPC error response 00:14:47.227 response: 00:14:47.227 { 00:14:47.227 "code": -32602, 00:14:47.227 "message": "Invalid MN SPDK_Controller\u001f" 00:14:47.227 }' 00:14:47.227 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:47.227 { 00:14:47.227 "nqn": "nqn.2016-06.io.spdk:cnode16692", 00:14:47.227 "model_number": "SPDK_Controller\u001f", 00:14:47.227 "method": "nvmf_create_subsystem", 00:14:47.227 "req_id": 1 00:14:47.227 } 00:14:47.227 Got JSON-RPC error response 00:14:47.227 response: 00:14:47.227 { 00:14:47.227 "code": -32602, 00:14:47.227 "message": "Invalid MN SPDK_Controller\u001f" 00:14:47.227 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:47.227 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:47.227 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # 
local chars 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:47.228 08:51:10 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '/Z\JJcWrmr3oI7k^%7`7w' 00:14:47.228 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '/Z\JJcWrmr3oI7k^%7`7w' nqn.2016-06.io.spdk:cnode8448 00:14:47.489 [2024-11-06 08:51:10.375389] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8448: invalid serial number '/Z\JJcWrmr3oI7k^%7`7w' 00:14:47.489 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:47.489 { 00:14:47.489 "nqn": "nqn.2016-06.io.spdk:cnode8448", 00:14:47.489 "serial_number": "/Z\\JJcWrmr3oI7k^%7`7w", 00:14:47.489 "method": "nvmf_create_subsystem", 00:14:47.489 "req_id": 1 00:14:47.489 } 00:14:47.489 Got JSON-RPC error response 00:14:47.489 response: 00:14:47.489 { 00:14:47.489 "code": -32602, 00:14:47.489 "message": "Invalid SN /Z\\JJcWrmr3oI7k^%7`7w" 00:14:47.489 }' 00:14:47.489 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:47.489 { 00:14:47.489 "nqn": "nqn.2016-06.io.spdk:cnode8448", 00:14:47.489 "serial_number": "/Z\\JJcWrmr3oI7k^%7`7w", 00:14:47.489 "method": "nvmf_create_subsystem", 00:14:47.489 "req_id": 1 00:14:47.489 } 00:14:47.489 Got JSON-RPC error response 00:14:47.489 response: 00:14:47.489 { 00:14:47.489 "code": -32602, 00:14:47.489 "message": "Invalid SN /Z\\JJcWrmr3oI7k^%7`7w" 00:14:47.489 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:47.489 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:47.489 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:47.489 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:47.489 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:47.489 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:47.489 08:51:10 
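# --- editor's note (not part of the captured log) -------------------------
# gen_random_s, traced in full above, builds an N-character string by
# indexing a table of ASCII codes 32..127 with $RANDOM (seeded via RANDOM=0
# earlier for reproducibility) and materialising each code via printf %x
# plus echo -e. Condensed sketch of the same approach:
gen_random_s_sketch() {
    local length=$1 ll string=
    local chars=($(seq 32 127))                 # same code range as the table
    for ((ll = 0; ll < length; ll++)); do
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}
gen_random_s_sketch 21   # the traced run produced '/Z\JJcWrmr3oI7k^%7`7w'
# ---------------------------------------------------------------------------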
[target/invalid.sh@24-25, 00:14:47.489-00:14:47.751 -- model-number generation loop elided: after (( ll = 0 )), each of the first 40 iterations checks (( ll < length )), picks the next code point with printf %x, converts it with echo -e '\x..', appends it via string+=..., and runs (( ll++ )); this assembles the 41-character random model number '(fo}sBaOm\O=d0po95GX7xur(TCQnY+#YdeW64"-' (with an unprintable \x7f between 'Y' and '+'). The final iteration and the RPC call follow.]
00:14:47.751 08:51:10
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '(fo}sBaOm\O=d0po95GX7xur(TCQnY+#YdeW64"-' 00:14:47.751 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '(fo}sBaOm\O=d0po95GX7xur(TCQnY+#YdeW64"-' nqn.2016-06.io.spdk:cnode9064 00:14:48.010 [2024-11-06 08:51:10.836872] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9064: invalid model number '(fo}sBaOm\O=d0po95GX7xur(TCQnY+#YdeW64"-' 00:14:48.010 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:48.010 { 00:14:48.010 "nqn": "nqn.2016-06.io.spdk:cnode9064", 00:14:48.010 "model_number": "(fo}sBaOm\\O=d0po95GX7xur(TCQnY\u007f+#YdeW64\"-", 00:14:48.010 "method": "nvmf_create_subsystem", 00:14:48.010 "req_id": 1 00:14:48.010 } 00:14:48.010 Got JSON-RPC error response 00:14:48.010 response: 00:14:48.010 { 00:14:48.010 "code": -32602, 00:14:48.010 "message": "Invalid MN (fo}sBaOm\\O=d0po95GX7xur(TCQnY\u007f+#YdeW64\"-" 00:14:48.010 }' 00:14:48.010 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:48.010 { 00:14:48.010 "nqn": "nqn.2016-06.io.spdk:cnode9064", 00:14:48.010 "model_number": "(fo}sBaOm\\O=d0po95GX7xur(TCQnY\u007f+#YdeW64\"-", 00:14:48.010 "method": "nvmf_create_subsystem", 00:14:48.010 "req_id": 1 00:14:48.010 } 00:14:48.010 Got JSON-RPC error response 00:14:48.010 response: 00:14:48.010 { 00:14:48.010 "code": -32602, 00:14:48.010 "message": "Invalid MN (fo}sBaOm\\O=d0po95GX7xur(TCQnY\u007f+#YdeW64\"-" 00:14:48.010 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:48.010 08:51:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:14:48.268 [2024-11-06 08:51:11.053966] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d796c0/0x1d7dbb0) succeed. 00:14:48.269 [2024-11-06 08:51:11.062924] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d7ad50/0x1dbf250) succeed. 
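The sequence above is the nvmf_invalid pattern in miniature: assemble a random model number one code point at a time, feed it to nvmf_create_subsystem -d, and require the JSON-RPC -32602 rejection whose message contains "Invalid MN". Below is a minimal bash sketch of that pattern, not invalid.sh itself: it assumes a running SPDK target, reuses the rpc.py path and nqn from this log, and restricts itself to printable codes 0x21-0x7e for simplicity (the test above also emits bytes such as \x7f).

  #!/usr/bin/env bash
  # Hedged sketch of the invalid-model-number check exercised above.
  # Assumes a running SPDK target; rpc.py path copied from this workspace.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  string=''
  for (( ll = 0; ll < 41; ll++ )); do
      code=$(( RANDOM % 94 + 33 ))                 # printable ASCII 0x21-0x7e
      string+=$(echo -e "\\x$(printf %x "$code")") # same printf / echo -e trick as invalid.sh@25
  done

  # nvmf_create_subsystem -d must reject the random model number with a
  # JSON-RPC -32602 error whose message contains "Invalid MN".
  out=$("$rpc" nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode9064 2>&1) || true
  [[ $out == *'Invalid MN'* ]] && echo 'model number correctly rejected'

The entries that follow apply the same expect-an-error pattern to listener removal and to out-of-range cntlid values (the -i/-I flags of nvmf_create_subsystem), each matched against its "Invalid ..." message.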
00:14:48.269 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:48.527 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:14:48.527 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:14:48.527 192.168.100.9' 00:14:48.527 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:48.527 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:14:48.527 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:14:48.786 [2024-11-06 08:51:11.597944] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:48.786 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:48.786 { 00:14:48.786 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:48.786 "listen_address": { 00:14:48.786 "trtype": "rdma", 00:14:48.786 "traddr": "192.168.100.8", 00:14:48.786 "trsvcid": "4421" 00:14:48.786 }, 00:14:48.786 "method": "nvmf_subsystem_remove_listener", 00:14:48.786 "req_id": 1 00:14:48.786 } 00:14:48.786 Got JSON-RPC error response 00:14:48.786 response: 00:14:48.786 { 00:14:48.786 "code": -32602, 00:14:48.786 "message": "Invalid parameters" 00:14:48.786 }' 00:14:48.786 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:48.786 { 00:14:48.786 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:48.786 "listen_address": { 00:14:48.786 "trtype": "rdma", 00:14:48.786 "traddr": "192.168.100.8", 00:14:48.786 "trsvcid": "4421" 00:14:48.786 }, 00:14:48.786 "method": "nvmf_subsystem_remove_listener", 00:14:48.786 "req_id": 1 00:14:48.786 } 00:14:48.786 Got JSON-RPC error response 00:14:48.786 response: 00:14:48.786 { 00:14:48.786 "code": -32602, 00:14:48.786 "message": "Invalid parameters" 00:14:48.786 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:48.786 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25576 -i 0 00:14:48.786 [2024-11-06 08:51:11.794603] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25576: invalid cntlid range [0-65519] 00:14:49.046 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:49.046 { 00:14:49.046 "nqn": "nqn.2016-06.io.spdk:cnode25576", 00:14:49.046 "min_cntlid": 0, 00:14:49.046 "method": "nvmf_create_subsystem", 00:14:49.046 "req_id": 1 00:14:49.046 } 00:14:49.046 Got JSON-RPC error response 00:14:49.046 response: 00:14:49.046 { 00:14:49.046 "code": -32602, 00:14:49.046 "message": "Invalid cntlid range [0-65519]" 00:14:49.046 }' 00:14:49.046 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:49.046 { 00:14:49.046 "nqn": "nqn.2016-06.io.spdk:cnode25576", 00:14:49.046 "min_cntlid": 0, 00:14:49.046 "method": "nvmf_create_subsystem", 00:14:49.046 "req_id": 1 00:14:49.046 } 00:14:49.046 Got JSON-RPC error response 00:14:49.046 response: 00:14:49.046 { 00:14:49.046 "code": -32602, 00:14:49.046 "message": 
"Invalid cntlid range [0-65519]" 00:14:49.046 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.046 08:51:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14849 -i 65520 00:14:49.046 [2024-11-06 08:51:11.999341] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14849: invalid cntlid range [65520-65519] 00:14:49.046 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:49.046 { 00:14:49.046 "nqn": "nqn.2016-06.io.spdk:cnode14849", 00:14:49.046 "min_cntlid": 65520, 00:14:49.046 "method": "nvmf_create_subsystem", 00:14:49.046 "req_id": 1 00:14:49.046 } 00:14:49.046 Got JSON-RPC error response 00:14:49.046 response: 00:14:49.046 { 00:14:49.046 "code": -32602, 00:14:49.046 "message": "Invalid cntlid range [65520-65519]" 00:14:49.046 }' 00:14:49.046 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:49.046 { 00:14:49.046 "nqn": "nqn.2016-06.io.spdk:cnode14849", 00:14:49.046 "min_cntlid": 65520, 00:14:49.046 "method": "nvmf_create_subsystem", 00:14:49.046 "req_id": 1 00:14:49.046 } 00:14:49.046 Got JSON-RPC error response 00:14:49.046 response: 00:14:49.046 { 00:14:49.046 "code": -32602, 00:14:49.046 "message": "Invalid cntlid range [65520-65519]" 00:14:49.046 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.046 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21528 -I 0 00:14:49.305 [2024-11-06 08:51:12.196054] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21528: invalid cntlid range [1-0] 00:14:49.305 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:49.305 { 00:14:49.305 "nqn": "nqn.2016-06.io.spdk:cnode21528", 00:14:49.305 "max_cntlid": 0, 00:14:49.305 "method": "nvmf_create_subsystem", 00:14:49.305 "req_id": 1 00:14:49.305 } 00:14:49.305 Got JSON-RPC error response 00:14:49.305 response: 00:14:49.305 { 00:14:49.305 "code": -32602, 00:14:49.305 "message": "Invalid cntlid range [1-0]" 00:14:49.305 }' 00:14:49.305 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:49.305 { 00:14:49.305 "nqn": "nqn.2016-06.io.spdk:cnode21528", 00:14:49.305 "max_cntlid": 0, 00:14:49.305 "method": "nvmf_create_subsystem", 00:14:49.305 "req_id": 1 00:14:49.305 } 00:14:49.305 Got JSON-RPC error response 00:14:49.305 response: 00:14:49.305 { 00:14:49.305 "code": -32602, 00:14:49.305 "message": "Invalid cntlid range [1-0]" 00:14:49.305 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.305 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20190 -I 65520 00:14:49.564 [2024-11-06 08:51:12.392771] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20190: invalid cntlid range [1-65520] 00:14:49.564 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:49.564 { 00:14:49.564 "nqn": "nqn.2016-06.io.spdk:cnode20190", 00:14:49.564 "max_cntlid": 65520, 00:14:49.564 "method": "nvmf_create_subsystem", 00:14:49.564 "req_id": 1 00:14:49.564 } 00:14:49.564 Got 
JSON-RPC error response 00:14:49.564 response: 00:14:49.564 { 00:14:49.564 "code": -32602, 00:14:49.564 "message": "Invalid cntlid range [1-65520]" 00:14:49.564 }' 00:14:49.564 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:49.564 { 00:14:49.564 "nqn": "nqn.2016-06.io.spdk:cnode20190", 00:14:49.564 "max_cntlid": 65520, 00:14:49.564 "method": "nvmf_create_subsystem", 00:14:49.564 "req_id": 1 00:14:49.564 } 00:14:49.564 Got JSON-RPC error response 00:14:49.564 response: 00:14:49.564 { 00:14:49.564 "code": -32602, 00:14:49.564 "message": "Invalid cntlid range [1-65520]" 00:14:49.564 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.564 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10792 -i 6 -I 5 00:14:49.824 [2024-11-06 08:51:12.597512] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10792: invalid cntlid range [6-5] 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:49.824 { 00:14:49.824 "nqn": "nqn.2016-06.io.spdk:cnode10792", 00:14:49.824 "min_cntlid": 6, 00:14:49.824 "max_cntlid": 5, 00:14:49.824 "method": "nvmf_create_subsystem", 00:14:49.824 "req_id": 1 00:14:49.824 } 00:14:49.824 Got JSON-RPC error response 00:14:49.824 response: 00:14:49.824 { 00:14:49.824 "code": -32602, 00:14:49.824 "message": "Invalid cntlid range [6-5]" 00:14:49.824 }' 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:49.824 { 00:14:49.824 "nqn": "nqn.2016-06.io.spdk:cnode10792", 00:14:49.824 "min_cntlid": 6, 00:14:49.824 "max_cntlid": 5, 00:14:49.824 "method": "nvmf_create_subsystem", 00:14:49.824 "req_id": 1 00:14:49.824 } 00:14:49.824 Got JSON-RPC error response 00:14:49.824 response: 00:14:49.824 { 00:14:49.824 "code": -32602, 00:14:49.824 "message": "Invalid cntlid range [6-5]" 00:14:49.824 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:49.824 { 00:14:49.824 "name": "foobar", 00:14:49.824 "method": "nvmf_delete_target", 00:14:49.824 "req_id": 1 00:14:49.824 } 00:14:49.824 Got JSON-RPC error response 00:14:49.824 response: 00:14:49.824 { 00:14:49.824 "code": -32602, 00:14:49.824 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:49.824 }' 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:49.824 { 00:14:49.824 "name": "foobar", 00:14:49.824 "method": "nvmf_delete_target", 00:14:49.824 "req_id": 1 00:14:49.824 } 00:14:49.824 Got JSON-RPC error response 00:14:49.824 response: 00:14:49.824 { 00:14:49.824 "code": -32602, 00:14:49.824 "message": "The specified target doesn't exist, cannot delete it." 
00:14:49.824 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:49.824 rmmod nvme_rdma 00:14:49.824 rmmod nvme_fabrics 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 404096 ']' 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 404096 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 404096 ']' 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 404096 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.824 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 404096 00:14:50.084 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:50.084 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:50.084 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 404096' 00:14:50.084 killing process with pid 404096 00:14:50.084 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 404096 00:14:50.084 08:51:12 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 404096 00:14:50.084 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:50.084 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:14:50.084 00:14:50.084 real 0m10.066s 00:14:50.084 user 0m19.304s 00:14:50.084 sys 0m5.359s 00:14:50.084 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.084 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:50.084 ************************************ 00:14:50.084 END TEST 
nvmf_invalid 00:14:50.084 ************************************ 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.344 ************************************ 00:14:50.344 START TEST nvmf_connect_stress 00:14:50.344 ************************************ 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:50.344 * Looking for test storage... 00:14:50.344 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:50.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.344 --rc genhtml_branch_coverage=1 00:14:50.344 --rc genhtml_function_coverage=1 00:14:50.344 --rc genhtml_legend=1 00:14:50.344 --rc geninfo_all_blocks=1 00:14:50.344 --rc geninfo_unexecuted_blocks=1 00:14:50.344 00:14:50.344 ' 00:14:50.344 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:50.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.344 --rc genhtml_branch_coverage=1 00:14:50.344 --rc genhtml_function_coverage=1 00:14:50.344 --rc genhtml_legend=1 00:14:50.344 --rc geninfo_all_blocks=1 00:14:50.344 --rc geninfo_unexecuted_blocks=1 00:14:50.344 00:14:50.344 ' 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:50.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.345 --rc genhtml_branch_coverage=1 00:14:50.345 --rc genhtml_function_coverage=1 00:14:50.345 --rc genhtml_legend=1 00:14:50.345 --rc geninfo_all_blocks=1 00:14:50.345 --rc geninfo_unexecuted_blocks=1 00:14:50.345 00:14:50.345 ' 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:50.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.345 --rc genhtml_branch_coverage=1 00:14:50.345 --rc genhtml_function_coverage=1 00:14:50.345 --rc genhtml_legend=1 00:14:50.345 --rc geninfo_all_blocks=1 00:14:50.345 --rc geninfo_unexecuted_blocks=1 00:14:50.345 00:14:50.345 ' 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.345 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.345 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.605 08:51:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.179 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:57.180 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:57.180 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:57.180 Found net devices under 0000:da:00.0: mlx_0_0 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:57.180 Found net devices under 0000:da:00.1: mlx_0_1 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.180 08:51:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:57.180 08:51:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:57.180 
08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:57.180 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:57.180 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.180 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:14:57.180 altname enp218s0f0np0 00:14:57.181 altname ens818f0np0 00:14:57.181 inet 192.168.100.8/24 scope global mlx_0_0 00:14:57.181 valid_lft forever preferred_lft forever 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:57.181 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.181 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:14:57.181 altname enp218s0f1np1 00:14:57.181 altname ens818f1np1 00:14:57.181 inet 192.168.100.9/24 scope global mlx_0_1 00:14:57.181 valid_lft forever preferred_lft forever 00:14:57.181 08:51:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:57.181 
08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:14:57.181 192.168.100.9' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:14:57.181 192.168.100.9' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # head -n 1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:14:57.181 192.168.100.9' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # tail -n +2 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # head -n 1 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=408137 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 408137 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 408137 ']' 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.181 08:51:19 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.181 [2024-11-06 08:51:19.250876] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:14:57.181 [2024-11-06 08:51:19.250929] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.181 [2024-11-06 08:51:19.326891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.181 [2024-11-06 08:51:19.367560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.181 [2024-11-06 08:51:19.367595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.181 [2024-11-06 08:51:19.367602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.181 [2024-11-06 08:51:19.367608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.181 [2024-11-06 08:51:19.367612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.181 [2024-11-06 08:51:19.369052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.181 [2024-11-06 08:51:19.369138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.181 [2024-11-06 08:51:19.369139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.181 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.181 [2024-11-06 08:51:19.534359] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c36530/0x1c3aa20) succeed. 
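The address-discovery pattern traced above (nvmf/common.sh@116-117 and @482-484) boils down to a few lines of shell: take the single-line IPv4 record that ip -o -4 addr show prints per interface, keep field 4 ("addr/prefix"), strip the prefix length, then split the two-address list into first and second target IPs with head/tail. A minimal standalone sketch, assuming the mlx_0_* interface names this run uses:

    # Sketch of nvmf/common.sh's get_ip_address plus the RDMA_IP_LIST split.
    # Interface names follow this run's mlx_0_* renames; adjust as needed.
    get_ip_address() {
        local interface=$1
        # -o prints one record per interface; $4 is "addr/prefix"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One address per line, mirroring the two-line RDMA_IP_LIST above
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here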
00:14:57.182 [2024-11-06 08:51:19.543396] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c37b20/0x1c7c0c0) succeed. 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.182 [2024-11-06 08:51:19.660019] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.182 NULL1 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=408215 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 
08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 
08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.182 08:51:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.182 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.182 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:57.182 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.182 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.182 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.441 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.441 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:57.441 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.441 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.441 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.009 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.009 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:58.009 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.009 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.009 08:51:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.268 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.268 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:58.268 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.268 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.268 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.527 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.527 
08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:58.527 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.527 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.527 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.787 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.787 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:58.787 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.787 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.787 08:51:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.045 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.045 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:59.045 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.045 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.045 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.614 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.614 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:59.614 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.614 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.614 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.873 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.873 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:14:59.873 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.873 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.873 08:51:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.132 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.132 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:00.132 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.132 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.132 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.392 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
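Everything from the PERF_PID=408215 assignment down through the repeated kill -0 408215 / rpc_cmd pairs above is connect_stress.sh's main loop: the stress client is launched in the background for 10 seconds, and while it stays alive the script keeps replaying the batch of 20 RPCs it queued into rpc.txt with the seq/cat lines earlier. kill -0 delivers no signal at all; it only tests whether the PID still exists. A sketch of that shape (how rpc_cmd consumes the batch is an assumption here; the real script's plumbing may differ):

    # Background stress client, as launched in this run (-t 10 = run for 10 s)
    "$rootdir/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!            # 408215 in this run

    # kill -0 probes liveness without delivering a signal
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"  # assumed: replay the RPC batch queued in rpc.txt
    done
    wait "$PERF_PID"       # line 38 above: reap the client once it exits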
00:15:00.392 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:00.392 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.392 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.392 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.960 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.960 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:00.960 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.960 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.960 08:51:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.219 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.219 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:01.219 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.219 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.219 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.478 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.478 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:01.478 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.478 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.478 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.737 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.737 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:01.737 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.737 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.737 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.996 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.996 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:01.996 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.996 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.996 08:51:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.565 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:02.565 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:02.565 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.565 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.565 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.824 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.824 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:02.824 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.824 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.824 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.082 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.082 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:03.082 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.082 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.082 08:51:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.341 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.341 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:03.341 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.341 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.341 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.600 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.600 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:03.600 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.600 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.600 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.167 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.167 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:04.167 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.167 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.167 08:51:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.425 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:04.425 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:04.425 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.425 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.425 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.684 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.684 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:04.684 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.684 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.684 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.943 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.943 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:04.943 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.943 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.943 08:51:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.511 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.511 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:05.511 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.511 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.511 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.771 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.771 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:05.771 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.771 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.771 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.029 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.029 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:06.029 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.029 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.029 08:51:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:06.289 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:06.289 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.289 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.289 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.866 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.866 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:06.866 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.866 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.866 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.126 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.126 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:07.126 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.126 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.126 08:51:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.126 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 408215 00:15:07.385 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (408215) - No such process 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 408215 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:07.385 rmmod nvme_rdma 00:15:07.385 rmmod nvme_fabrics 
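What follows "No such process" is the EXIT trap doing teardown: nvmftestfini syncs, unloads the kernel initiator modules inside a set +e retry loop (nvme-rdma can be busy for a moment while queues drain, and removing it also pulls out nvme_fabrics, hence the two rmmod lines), then killprocess stops the nvmf_tgt reactor after checking what the PID's comm actually is. A rough sketch of that sequence, simplified from nvmf/common.sh@121-129 and the killprocess trace above:

    sync
    set +e                                # tolerate "module in use" on early tries
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
    done
    modprobe -v -r nvme-fabrics
    set -e

    killprocess() {                       # simplified from autotest_common.sh
        local pid=$1 process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
        fi
        [ "$process_name" = sudo ] && return 1   # simplification: never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
    killprocess "$nvmfpid"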
00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 408137 ']' 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 408137 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 408137 ']' 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 408137 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 408137 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 408137' 00:15:07.385 killing process with pid 408137 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 408137 00:15:07.385 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 408137 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:15:07.644 00:15:07.644 real 0m17.404s 00:15:07.644 user 0m41.253s 00:15:07.644 sys 0m6.313s 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.644 ************************************ 00:15:07.644 END TEST nvmf_connect_stress 00:15:07.644 ************************************ 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.644 ************************************ 00:15:07.644 START TEST nvmf_fused_ordering 00:15:07.644 ************************************ 00:15:07.644 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:15:07.904 * Looking for test storage... 
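The banner pairs around each sub-test, and the real/user/sys block just above, come from autotest_common.sh's run_test wrapper: it validates its arguments (the '[' 3 -le 1 ']' check), prints the START banner, times the script, and prints the END banner while propagating the exit code. A hedged reconstruction of its shape, not the verbatim helper:

    run_test() {
        if [ $# -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"            # time is a shell keyword; real/user/sys go to stderr
        local rc=$?          # exit status of the timed command
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_fused_ordering "$rootdir/test/nvmf/target/fused_ordering.sh" --transport=rdma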
00:15:07.904 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lcov --version 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:07.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.904 --rc genhtml_branch_coverage=1 00:15:07.904 --rc genhtml_function_coverage=1 00:15:07.904 --rc genhtml_legend=1 00:15:07.904 --rc geninfo_all_blocks=1 00:15:07.904 --rc geninfo_unexecuted_blocks=1 00:15:07.904 00:15:07.904 ' 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:07.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.904 --rc genhtml_branch_coverage=1 00:15:07.904 --rc genhtml_function_coverage=1 00:15:07.904 --rc genhtml_legend=1 00:15:07.904 --rc geninfo_all_blocks=1 00:15:07.904 --rc geninfo_unexecuted_blocks=1 00:15:07.904 00:15:07.904 ' 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:07.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.904 --rc genhtml_branch_coverage=1 00:15:07.904 --rc genhtml_function_coverage=1 00:15:07.904 --rc genhtml_legend=1 00:15:07.904 --rc geninfo_all_blocks=1 00:15:07.904 --rc geninfo_unexecuted_blocks=1 00:15:07.904 00:15:07.904 ' 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:07.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.904 --rc genhtml_branch_coverage=1 00:15:07.904 --rc genhtml_function_coverage=1 00:15:07.904 --rc genhtml_legend=1 00:15:07.904 --rc geninfo_all_blocks=1 00:15:07.904 --rc geninfo_unexecuted_blocks=1 00:15:07.904 00:15:07.904 ' 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.904 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
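The paths/export.sh trace above re-prepends the same /opt/golangci, /opt/protoc and /opt/go directories on every source, which is why PATH balloons with duplicates. A generic dedup idiom that keeps the first occurrence of each entry, in order (this is not something export.sh itself runs):

dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
PATH=$(dedup_path "$PATH")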
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.905 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:07.905 08:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:14.478 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
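The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 expanding an empty variable into '[' '' -eq 1 ']': test's -eq needs integers on both sides. A reproduction plus the two usual guards (generic shell idioms, not a quote of common.sh):

unset VAR
[ "$VAR" -eq 1 ]                      # -> [: : integer expression expected, status 2
[ "${VAR:-0}" -eq 1 ]                 # guard 1: default the empty value to 0
[ -n "$VAR" ] && [ "$VAR" -eq 1 ]     # guard 2: require non-empty before comparing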
local -ga x722 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:14.479 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:14.479 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:15:14.479 Found net devices under 0000:da:00.0: mlx_0_0 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:14.479 Found net devices under 0000:da:00.1: mlx_0_1 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.479 08:51:36 
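Mapping a PCI function to its kernel netdev names, as the trace does twice above ("Found net devices under ..."), is a sysfs glob plus a prefix strip:

pci=0000:da:00.0                           # address taken from the log above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")    # drop the directory prefix
echo "Found net devices under $pci: ${pci_net_devs[*]}"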
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # rdma_device_init 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@528 -- # allocate_nic_ips 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:14.479 
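rdma_device_init above loads the IB/RDMA module stack one modprobe at a time; the same list as a loop (module names copied from the trace; run as root):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done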
08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:14.479 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:14.479 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:14.480 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:14.480 altname enp218s0f0np0 00:15:14.480 altname ens818f0np0 00:15:14.480 inet 192.168.100.8/24 scope global mlx_0_0 00:15:14.480 valid_lft forever preferred_lft forever 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:14.480 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:14.480 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:14.480 altname enp218s0f1np1 00:15:14.480 altname ens818f1np1 00:15:14.480 inet 192.168.100.9/24 scope global mlx_0_1 00:15:14.480 valid_lft forever preferred_lft forever 00:15:14.480 08:51:36 
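get_ip_address, traced twice above, is a three-stage pipeline: column 4 of `ip -o -4 addr show <if>` is ADDR/PREFIX, and cut drops the prefix length:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # -> 192.168.100.8 on this host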
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:14.480 
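get_available_rdma_ips then repeats that pipeline per RDMA netdev to build the newline-separated list consumed just below. A sketch with the interface names hard-coded for illustration; the real helper discovers them via rxe_cfg as traced:

rdma_ips=$(for nic in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
done)    # -> "192.168.100.8" and "192.168.100.9", one per line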
08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:15:14.480 192.168.100.9' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:15:14.480 192.168.100.9' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # head -n 1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:15:14.480 192.168.100.9' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # tail -n +2 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # head -n 1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=413241 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 413241 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 413241 ']' 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.480 08:51:36 
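Splitting that two-line list into first and second target IPs, exactly as the head/tail calls above do:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)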
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.480 [2024-11-06 08:51:36.786262] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:15:14.480 [2024-11-06 08:51:36.786306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.480 [2024-11-06 08:51:36.859522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.480 [2024-11-06 08:51:36.900112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.480 [2024-11-06 08:51:36.900145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.480 [2024-11-06 08:51:36.900152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.480 [2024-11-06 08:51:36.900158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.480 [2024-11-06 08:51:36.900162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.480 [2024-11-06 08:51:36.900745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:14.480 08:51:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.480 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.481 [2024-11-06 08:51:37.056694] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1095ea0/0x109a390) succeed. 00:15:14.481 [2024-11-06 08:51:37.065784] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1097350/0x10dba30) succeed. 
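nvmfappstart above boils down to launching nvmf_tgt on core mask 0x2 and blocking until the RPC socket answers. A rough equivalent; waitforlisten's real implementation differs in detail, and rpc_get_methods is used here only as a convenient probe:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2    # poll until the target listens on /var/tmp/spdk.sock
done
echo "nvmf_tgt up as pid $nvmfpid"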
00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.481 [2024-11-06 08:51:37.111592] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.481 NULL1 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.481 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:14.481 [2024-11-06 08:51:37.169110] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
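The rpc_cmd calls above, written out as plain rpc.py invocations (rpc_cmd is a thin wrapper; every argument below is copied from the trace). The null bdev is sized in MiB, which is the 1GB namespace the fused_ordering binary attaches to next:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512    # 1000 MiB, 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1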
00:15:14.481 [2024-11-06 08:51:37.169141] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413327 ] 00:15:14.481 Attached to nqn.2016-06.io.spdk:cnode1 00:15:14.481 Namespace ID: 1 size: 1GB 00:15:14.481 fused_ordering(0) 00:15:14.481 fused_ordering(1) 00:15:14.481 fused_ordering(2) 00:15:14.481 fused_ordering(3) 00:15:14.481 fused_ordering(4) 00:15:14.481 fused_ordering(5) 00:15:14.481 fused_ordering(6) 00:15:14.481 fused_ordering(7) 00:15:14.481 fused_ordering(8) 00:15:14.481 fused_ordering(9) 00:15:14.481 fused_ordering(10) 00:15:14.481 fused_ordering(11) 00:15:14.481 fused_ordering(12) 00:15:14.481 fused_ordering(13) 00:15:14.481 fused_ordering(14) 00:15:14.481 fused_ordering(15) 00:15:14.481 fused_ordering(16) 00:15:14.481 fused_ordering(17) 00:15:14.481 fused_ordering(18) 00:15:14.481 fused_ordering(19) 00:15:14.481 fused_ordering(20) 00:15:14.481 fused_ordering(21) 00:15:14.481 fused_ordering(22) 00:15:14.481 fused_ordering(23) 00:15:14.481 fused_ordering(24) 00:15:14.481 fused_ordering(25) 00:15:14.481 fused_ordering(26) 00:15:14.481 fused_ordering(27) 00:15:14.481 fused_ordering(28) 00:15:14.481 fused_ordering(29) 00:15:14.481 fused_ordering(30) 00:15:14.481 fused_ordering(31) 00:15:14.481 fused_ordering(32) 00:15:14.481 fused_ordering(33) 00:15:14.481 fused_ordering(34) 00:15:14.481 fused_ordering(35) 00:15:14.481 fused_ordering(36) 00:15:14.481 fused_ordering(37) 00:15:14.481 fused_ordering(38) 00:15:14.481 fused_ordering(39) 00:15:14.481 fused_ordering(40) 00:15:14.481 fused_ordering(41) 00:15:14.481 fused_ordering(42) 00:15:14.481 fused_ordering(43) 00:15:14.481 fused_ordering(44) 00:15:14.481 fused_ordering(45) 00:15:14.481 fused_ordering(46) 00:15:14.481 fused_ordering(47) 00:15:14.481 fused_ordering(48) 00:15:14.481 fused_ordering(49) 00:15:14.481 fused_ordering(50) 00:15:14.481 fused_ordering(51) 00:15:14.481 fused_ordering(52) 00:15:14.481 fused_ordering(53) 00:15:14.481 fused_ordering(54) 00:15:14.481 fused_ordering(55) 00:15:14.481 fused_ordering(56) 00:15:14.481 fused_ordering(57) 00:15:14.481 fused_ordering(58) 00:15:14.481 fused_ordering(59) 00:15:14.481 fused_ordering(60) 00:15:14.481 fused_ordering(61) 00:15:14.481 fused_ordering(62) 00:15:14.481 fused_ordering(63) 00:15:14.481 fused_ordering(64) 00:15:14.481 fused_ordering(65) 00:15:14.481 fused_ordering(66) 00:15:14.481 fused_ordering(67) 00:15:14.481 fused_ordering(68) 00:15:14.481 fused_ordering(69) 00:15:14.481 fused_ordering(70) 00:15:14.481 fused_ordering(71) 00:15:14.481 fused_ordering(72) 00:15:14.481 fused_ordering(73) 00:15:14.481 fused_ordering(74) 00:15:14.481 fused_ordering(75) 00:15:14.481 fused_ordering(76) 00:15:14.481 fused_ordering(77) 00:15:14.481 fused_ordering(78) 00:15:14.481 fused_ordering(79) 00:15:14.481 fused_ordering(80) 00:15:14.481 fused_ordering(81) 00:15:14.481 fused_ordering(82) 00:15:14.481 fused_ordering(83) 00:15:14.481 fused_ordering(84) 00:15:14.481 fused_ordering(85) 00:15:14.481 fused_ordering(86) 00:15:14.481 fused_ordering(87) 00:15:14.481 fused_ordering(88) 00:15:14.481 fused_ordering(89) 00:15:14.481 fused_ordering(90) 00:15:14.481 fused_ordering(91) 00:15:14.481 fused_ordering(92) 00:15:14.481 fused_ordering(93) 00:15:14.481 fused_ordering(94) 00:15:14.481 fused_ordering(95) 00:15:14.481 fused_ordering(96) 00:15:14.481 fused_ordering(97) 00:15:14.481 fused_ordering(98) 
00:15:14.481 fused_ordering(99) [fused_ordering(100) through fused_ordering(957): 858 further sequential per-iteration markers, identical in form, timestamps advancing from 00:15:14.481 to 00:15:15.004] 00:15:15.004 fused_ordering(958)
00:15:15.004 fused_ordering(959) 00:15:15.004 fused_ordering(960) 00:15:15.004 fused_ordering(961) 00:15:15.004 fused_ordering(962) 00:15:15.004 fused_ordering(963) 00:15:15.004 fused_ordering(964) 00:15:15.004 fused_ordering(965) 00:15:15.004 fused_ordering(966) 00:15:15.004 fused_ordering(967) 00:15:15.004 fused_ordering(968) 00:15:15.004 fused_ordering(969) 00:15:15.004 fused_ordering(970) 00:15:15.004 fused_ordering(971) 00:15:15.004 fused_ordering(972) 00:15:15.004 fused_ordering(973) 00:15:15.004 fused_ordering(974) 00:15:15.004 fused_ordering(975) 00:15:15.004 fused_ordering(976) 00:15:15.004 fused_ordering(977) 00:15:15.004 fused_ordering(978) 00:15:15.004 fused_ordering(979) 00:15:15.004 fused_ordering(980) 00:15:15.004 fused_ordering(981) 00:15:15.004 fused_ordering(982) 00:15:15.004 fused_ordering(983) 00:15:15.004 fused_ordering(984) 00:15:15.004 fused_ordering(985) 00:15:15.004 fused_ordering(986) 00:15:15.004 fused_ordering(987) 00:15:15.004 fused_ordering(988) 00:15:15.004 fused_ordering(989) 00:15:15.004 fused_ordering(990) 00:15:15.004 fused_ordering(991) 00:15:15.004 fused_ordering(992) 00:15:15.004 fused_ordering(993) 00:15:15.004 fused_ordering(994) 00:15:15.004 fused_ordering(995) 00:15:15.004 fused_ordering(996) 00:15:15.004 fused_ordering(997) 00:15:15.004 fused_ordering(998) 00:15:15.004 fused_ordering(999) 00:15:15.004 fused_ordering(1000) 00:15:15.004 fused_ordering(1001) 00:15:15.004 fused_ordering(1002) 00:15:15.004 fused_ordering(1003) 00:15:15.004 fused_ordering(1004) 00:15:15.004 fused_ordering(1005) 00:15:15.004 fused_ordering(1006) 00:15:15.004 fused_ordering(1007) 00:15:15.004 fused_ordering(1008) 00:15:15.004 fused_ordering(1009) 00:15:15.004 fused_ordering(1010) 00:15:15.004 fused_ordering(1011) 00:15:15.004 fused_ordering(1012) 00:15:15.004 fused_ordering(1013) 00:15:15.004 fused_ordering(1014) 00:15:15.004 fused_ordering(1015) 00:15:15.004 fused_ordering(1016) 00:15:15.004 fused_ordering(1017) 00:15:15.004 fused_ordering(1018) 00:15:15.004 fused_ordering(1019) 00:15:15.004 fused_ordering(1020) 00:15:15.004 fused_ordering(1021) 00:15:15.004 fused_ordering(1022) 00:15:15.004 fused_ordering(1023) 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:15.004 rmmod nvme_rdma 00:15:15.004 rmmod nvme_fabrics 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:15.004 08:51:37 
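The set +e / set -e bracket in the teardown above exists because nvme-rdma can refuse to unload while queue pairs are still draining, so the unload is retried. A minimal sketch of that unload-retry pattern, reconstructed from the calls logged above (the sleep between attempts is an assumption; the log only shows the loop header and the modprobe calls):

    # Retry unloading nvme-rdma, then drop nvme-fabrics, mirroring the
    # nvmf/common.sh trace above. The `sleep 1` backoff is assumed.
    sync
    set +e                              # unload may fail while references remain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e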
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 413241 ']' 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 413241 00:15:15.004 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 413241 ']' 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 413241 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 413241 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 413241' 00:15:15.005 killing process with pid 413241 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 413241 00:15:15.005 08:51:37 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 413241 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:15:15.264 00:15:15.264 real 0m7.512s 00:15:15.264 user 0m3.779s 00:15:15.264 sys 0m4.866s 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:15.264 ************************************ 00:15:15.264 END TEST nvmf_fused_ordering 00:15:15.264 ************************************ 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:15.264 ************************************ 00:15:15.264 START TEST nvmf_ns_masking 00:15:15.264 ************************************ 00:15:15.264 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:15:15.525 * Looking for test storage... 
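The killprocess helper traced in the teardown above probes the pid with kill -0, reads the process name with ps --no-headers -o comm= to special-case sudo-owned targets, then kills and reaps it. A condensed sketch (the sudo branch visible in the trace is elided here):

    # Condensed killprocess, per the common/autotest_common.sh calls above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap it if it is our child
    }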
00:15:15.525 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lcov --version 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:15.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.525 --rc genhtml_branch_coverage=1 00:15:15.525 --rc genhtml_function_coverage=1 00:15:15.525 --rc genhtml_legend=1 00:15:15.525 --rc geninfo_all_blocks=1 00:15:15.525 --rc geninfo_unexecuted_blocks=1 00:15:15.525 00:15:15.525 ' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:15.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.525 --rc genhtml_branch_coverage=1 00:15:15.525 --rc genhtml_function_coverage=1 00:15:15.525 --rc genhtml_legend=1 00:15:15.525 --rc geninfo_all_blocks=1 00:15:15.525 --rc geninfo_unexecuted_blocks=1 00:15:15.525 00:15:15.525 ' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:15.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.525 --rc genhtml_branch_coverage=1 00:15:15.525 --rc genhtml_function_coverage=1 00:15:15.525 --rc genhtml_legend=1 00:15:15.525 --rc geninfo_all_blocks=1 00:15:15.525 --rc geninfo_unexecuted_blocks=1 00:15:15.525 00:15:15.525 ' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:15.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.525 --rc genhtml_branch_coverage=1 00:15:15.525 --rc genhtml_function_coverage=1 00:15:15.525 --rc genhtml_legend=1 00:15:15.525 --rc geninfo_all_blocks=1 00:15:15.525 --rc geninfo_unexecuted_blocks=1 00:15:15.525 00:15:15.525 ' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.525 08:51:38 
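The lt 1.15 2 probe above is scripts/common.sh comparing the installed lcov version against 2: both strings are split on dots, dashes, and colons into arrays and compared one component at a time. A condensed sketch of that comparison using the same IFS=.-: splitting the trace shows (bodies reconstructed; the original also validates each component as a decimal):

    # Component-wise version comparison, as exercised by `lt 1.15 2` above.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v d1 d2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]                 # equal: only <=, >=, == succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # here 1 < 2, so the lcov opts path is taken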
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.525 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:15.526 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:15.526 08:51:38 
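Each of the PATH walls above is paths/export.sh prepending its toolchain directories again on every re-source, so the /opt/golangci, /opt/protoc, and /opt/go entries pile up. That is harmless here, but if deduplication were ever wanted, an awk pass like this illustrative helper (not part of the test scripts) would collapse the list while keeping first-occurrence order:

    # Illustrative only: collapse duplicate PATH entries.
    dedup_path() {
        PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
        export PATH
    }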
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d8f70cb7-ef60-4c53-a4a9-83b9a5dd4168 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=65e132f1-13f1-43f8-938e-58462d902bbf 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c29a8c6a-3fa3-4233-bad3-e1429c44eeaa 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:15.526 08:51:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.102 08:51:44 
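The e810/x722/mlx arrays built above are allowlists of PCI vendor:device IDs keyed off pci_bus_cache, and because the configured NIC type matches (the [[ mlx5 == mlx5 ]] check), pci_devs is narrowed to the Mellanox list; both ports of the adapter at 0000:da:00.x then match 0x15b3:0x1015, as echoed below. To reproduce the discovery by hand, a plain lspci query lists the same devices (a sketch; the harness populates its pci_bus_cache separately rather than via lspci):

    # Show Mellanox (vendor 0x15b3) devices with numeric IDs.
    lspci -d 15b3: -nn
    # Per the log, this host reports two ConnectX ports:
    #   da:00.0 ... [15b3:1015]
    #   da:00.1 ... [15b3:1015]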
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:22.102 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:22.102 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:15:22.102 Found net devices under 0000:da:00.0: mlx_0_0 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 
0 )) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:22.102 Found net devices under 0000:da:00.1: mlx_0_1 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # rdma_device_init 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@528 -- # allocate_nic_ips 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:22.102 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:22.103 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:22.103 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:22.103 altname enp218s0f0np0 00:15:22.103 altname ens818f0np0 00:15:22.103 inet 192.168.100.8/24 scope global mlx_0_0 00:15:22.103 valid_lft forever preferred_lft forever 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:22.103 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:22.103 link/ether 
ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:22.103 altname enp218s0f1np1 00:15:22.103 altname ens818f1np1 00:15:22.103 inet 192.168.100.9/24 scope global mlx_0_1 00:15:22.103 valid_lft forever preferred_lft forever 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:15:22.103 192.168.100.9' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:15:22.103 192.168.100.9' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # head -n 1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:15:22.103 192.168.100.9' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # tail -n +2 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # head -n 1 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=416628 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 416628 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 416628 ']' 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.103 08:51:44 
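The two target addresses are peeled off RDMA_IP_LIST exactly as traced above: head -n 1 takes the first interface's IP and tail -n +2 | head -n 1 takes the second. A standalone replay, with values copied from the log:

    # Extract the first and second target IPs from the newline-separated list.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9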
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.103 [2024-11-06 08:51:44.357439] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:15:22.103 [2024-11-06 08:51:44.357483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.103 [2024-11-06 08:51:44.432194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.103 [2024-11-06 08:51:44.470351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.103 [2024-11-06 08:51:44.470385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.103 [2024-11-06 08:51:44.470393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.103 [2024-11-06 08:51:44.470400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.103 [2024-11-06 08:51:44.470404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.103 [2024-11-06 08:51:44.470962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.103 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:22.103 [2024-11-06 08:51:44.799019] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14cdb40/0x14d2030) succeed. 00:15:22.104 [2024-11-06 08:51:44.809641] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14ceff0/0x15136d0) succeed. 
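With the target process up and both IB devices created, everything that follows is driven through scripts/rpc.py. A condensed replay of the calls logged here and just below (sizes, NQNs, and addresses are copied from the log):

    # RPC sequence: RDMA transport, two 64 MB malloc bdevs, one subsystem
    # with Malloc1 as namespace 1, and an RDMA listener on 192.168.100.8.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420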
00:15:22.104 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:22.104 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:22.104 08:51:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.104 Malloc1 00:15:22.104 08:51:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:22.363 Malloc2 00:15:22.363 08:51:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:22.621 08:51:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:22.880 08:51:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:22.880 [2024-11-06 08:51:45.879522] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:23.139 08:51:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:23.139 08:51:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c29a8c6a-3fa3-4233-bad3-e1429c44eeaa -a 192.168.100.8 -s 4420 -i 4 00:15:23.428 08:51:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.428 08:51:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:23.428 08:51:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.428 08:51:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:23.428 08:51:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:25.465 [ 0]:0x1 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed67f3d8e8fa418a9089053246ac6ad1 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed67f3d8e8fa418a9089053246ac6ad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.465 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:25.756 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:25.757 [ 0]:0x1 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed67f3d8e8fa418a9089053246ac6ad1 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed67f3d8e8fa418a9089053246ac6ad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.757 [ 1]:0x2 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15f72390080d4258a1935fb9ff3e7811 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15f72390080d4258a1935fb9ff3e7811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:25.757 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:15:26.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.038 08:51:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.330 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:26.330 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:26.330 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c29a8c6a-3fa3-4233-bad3-e1429c44eeaa -a 192.168.100.8 -s 4420 -i 4 00:15:26.588 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:26.588 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:26.588 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.588 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:26.588 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:26.589 08:51:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:29.124 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:29.125 [ 0]:0x2 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15f72390080d4258a1935fb9ff3e7811 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15f72390080d4258a1935fb9ff3e7811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:29.125 [ 0]:0x1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.125 08:51:51 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed67f3d8e8fa418a9089053246ac6ad1 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed67f3d8e8fa418a9089053246ac6ad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.125 08:51:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:29.125 [ 1]:0x2 00:15:29.125 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.125 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.125 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15f72390080d4258a1935fb9ff3e7811 00:15:29.125 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15f72390080d4258a1935fb9ff3e7811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.125 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:29.385 [ 0]:0x2 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15f72390080d4258a1935fb9ff3e7811 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15f72390080d4258a1935fb9ff3e7811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:29.385 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.904 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:29.904 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:29.904 08:51:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c29a8c6a-3fa3-4233-bad3-e1429c44eeaa -a 192.168.100.8 -s 4420 -i 4 00:15:30.163 08:51:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:30.163 08:51:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:30.163 08:51:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.163 08:51:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:30.163 08:51:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:30.163 08:51:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.697 08:51:55 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:32.697 [ 0]:0x1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed67f3d8e8fa418a9089053246ac6ad1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed67f3d8e8fa418a9089053246ac6ad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:32.697 [ 1]:0x2 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15f72390080d4258a1935fb9ff3e7811 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15f72390080d4258a1935fb9ff3e7811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:32.697 08:51:55 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.697 [ 0]:0x2 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15f72390080d4258a1935fb9ff3e7811 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15f72390080d4258a1935fb9ff3e7811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.697 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:15:32.698 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:32.956 [2024-11-06 08:51:55.795151] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:32.956 request: 00:15:32.956 { 00:15:32.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.956 "nsid": 2, 00:15:32.956 "host": "nqn.2016-06.io.spdk:host1", 00:15:32.956 "method": "nvmf_ns_remove_host", 00:15:32.956 "req_id": 1 00:15:32.956 } 00:15:32.956 Got JSON-RPC error response 00:15:32.956 response: 00:15:32.956 { 00:15:32.956 "code": -32602, 00:15:32.956 "message": "Invalid parameters" 00:15:32.956 } 00:15:32.956 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:32.957 08:51:55 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:32.957 [ 0]:0x2 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=15f72390080d4258a1935fb9ff3e7811 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 15f72390080d4258a1935fb9ff3e7811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:32.957 08:51:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=418657 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 418657 /var/tmp/host.sock 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 418657 ']' 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:33.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
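
The trace up to this point exercises the core of namespace masking: a namespace added with --no-auto-visible reads back an all-zero NGUID (masked) until nvmf_ns_add_host grants a specific host NQN access, and nvmf_ns_remove_host hides it again, with the change taking effect on the live controller without a reconnect. A minimal standalone sketch of that flow, using only RPCs and nvme-cli calls that appear in the trace above; the bdev name, NQNs, serial, and the 192.168.100.8 RDMA listener are this run's values, not requirements, and the rdma transport is assumed to have been created already (as it was earlier in this run):

    # Sketch: per-host namespace masking on an SPDK NVMe-oF/RDMA target.
    # Assumes a running nvmf target with the rdma transport configured
    # and SPDK's scripts/rpc.py on PATH.
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Without auto-visibility the namespace is hidden from every host.
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -a 192.168.100.8 -s 4420
    nvme id-ns /dev/nvme0 -n 1 -o json | jq -r .nguid   # all zeros -> masked

    # Grant, then revoke, visibility for this host only.
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme id-ns /dev/nvme0 -n 1 -o json | jq -r .nguid   # real NGUID -> visible
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The zeros-versus-NGUID comparison is the same check ns_masking.sh performs in its ns_is_visible helper at lines @44/@45 of the trace.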
00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.525 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.525 [2024-11-06 08:51:56.294071] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:15:33.525 [2024-11-06 08:51:56.294119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418657 ] 00:15:33.525 [2024-11-06 08:51:56.373013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.525 [2024-11-06 08:51:56.413577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.785 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.785 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:33.785 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.044 08:51:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:34.044 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d8f70cb7-ef60-4c53-a4a9-83b9a5dd4168 00:15:34.044 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:34.044 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D8F70CB7EF604C53A4A983B9A5DD4168 -i 00:15:34.303 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 65e132f1-13f1-43f8-938e-58462d902bbf 00:15:34.303 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:34.303 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 65E132F113F143F8938E58462D902BBF -i 00:15:34.562 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:34.821 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:34.821 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:34.821 08:51:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:15:35.080 nvme0n1 00:15:35.340 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:35.340 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:35.340 nvme1n2 00:15:35.599 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:35.599 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:35.599 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:35.599 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:35.599 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:35.599 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:35.599 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:35.600 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:35.600 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:35.858 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d8f70cb7-ef60-4c53-a4a9-83b9a5dd4168 == \d\8\f\7\0\c\b\7\-\e\f\6\0\-\4\c\5\3\-\a\4\a\9\-\8\3\b\9\a\5\d\d\4\1\6\8 ]] 00:15:35.858 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:35.858 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:35.858 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:36.117 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 65e132f1-13f1-43f8-938e-58462d902bbf == \6\5\e\1\3\2\f\1\-\1\3\f\1\-\4\3\f\8\-\9\3\8\e\-\5\8\4\6\2\d\9\0\2\b\b\f ]] 00:15:36.117 08:51:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d8f70cb7-ef60-4c53-a4a9-83b9a5dd4168 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D8F70CB7EF604C53A4A983B9A5DD4168 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D8F70CB7EF604C53A4A983B9A5DD4168 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:15:36.375 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D8F70CB7EF604C53A4A983B9A5DD4168 00:15:36.634 [2024-11-06 08:51:59.526193] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:36.634 [2024-11-06 08:51:59.526232] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:36.634 [2024-11-06 08:51:59.526241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.634 request: 00:15:36.634 { 00:15:36.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.634 "namespace": { 00:15:36.634 "bdev_name": "invalid", 00:15:36.634 "nsid": 1, 00:15:36.634 "nguid": "D8F70CB7EF604C53A4A983B9A5DD4168", 00:15:36.634 "no_auto_visible": false, 00:15:36.634 "no_metadata": false 00:15:36.634 }, 00:15:36.634 "method": "nvmf_subsystem_add_ns", 00:15:36.634 "req_id": 1 00:15:36.634 } 00:15:36.634 Got JSON-RPC error response 00:15:36.634 response: 00:15:36.634 { 00:15:36.634 "code": -32602, 00:15:36.634 "message": "Invalid parameters" 00:15:36.634 } 00:15:36.634 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:36.634 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:36.634 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:36.634 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:36.634 
08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d8f70cb7-ef60-4c53-a4a9-83b9a5dd4168 00:15:36.634 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:36.634 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D8F70CB7EF604C53A4A983B9A5DD4168 -i 00:15:36.893 08:51:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:38.796 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:38.796 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:38.796 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:39.055 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:39.055 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 418657 00:15:39.055 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 418657 ']' 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 418657 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 418657 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 418657' 00:15:39.056 killing process with pid 418657 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 418657 00:15:39.056 08:52:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 418657 00:15:39.314 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:39.574 08:52:02 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:39.574 rmmod nvme_rdma 00:15:39.574 rmmod nvme_fabrics 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 416628 ']' 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 416628 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 416628 ']' 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 416628 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.574 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 416628 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 416628' 00:15:39.833 killing process with pid 416628 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 416628 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 416628 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:15:39.833 00:15:39.833 real 0m24.631s 00:15:39.833 user 0m32.103s 00:15:39.833 sys 0m6.445s 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.833 08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:39.833 ************************************ 00:15:39.833 END TEST nvmf_ns_masking 00:15:39.833 ************************************ 00:15:40.091 08:52:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:40.091 08:52:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:40.091 08:52:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:40.091 08:52:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.091 08:52:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.091 ************************************ 00:15:40.091 START TEST nvmf_nvme_cli 00:15:40.091 ************************************ 00:15:40.092 
08:52:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:40.092 * Looking for test storage... 00:15:40.092 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lcov --version 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:40.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.092 --rc genhtml_branch_coverage=1 00:15:40.092 --rc genhtml_function_coverage=1 00:15:40.092 --rc genhtml_legend=1 00:15:40.092 --rc geninfo_all_blocks=1 00:15:40.092 --rc geninfo_unexecuted_blocks=1 00:15:40.092 00:15:40.092 ' 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:40.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.092 --rc genhtml_branch_coverage=1 00:15:40.092 --rc genhtml_function_coverage=1 00:15:40.092 --rc genhtml_legend=1 00:15:40.092 --rc geninfo_all_blocks=1 00:15:40.092 --rc geninfo_unexecuted_blocks=1 00:15:40.092 00:15:40.092 ' 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:40.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.092 --rc genhtml_branch_coverage=1 00:15:40.092 --rc genhtml_function_coverage=1 00:15:40.092 --rc genhtml_legend=1 00:15:40.092 --rc geninfo_all_blocks=1 00:15:40.092 --rc geninfo_unexecuted_blocks=1 00:15:40.092 00:15:40.092 ' 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:40.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.092 --rc genhtml_branch_coverage=1 00:15:40.092 --rc genhtml_function_coverage=1 00:15:40.092 --rc genhtml_legend=1 00:15:40.092 --rc geninfo_all_blocks=1 00:15:40.092 --rc geninfo_unexecuted_blocks=1 00:15:40.092 00:15:40.092 ' 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.092 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.351 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.352 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.352 08:52:03 
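Note the harness warning just above, "line 33: [: : integer expression expected": an environment toggle expands to the empty string and is then fed to the numeric test '[' '' -eq 1 ']' inside build_nvmf_app_args. It is harmless here, the test simply evaluates false and the script continues, but the usual defensive pattern is to default the variable before comparing. Illustration only; SPDK_TEST_EXAMPLE_FLAG is a placeholder, not the actual variable tested at nvmf/common.sh line 33:

    # Defaulting avoids "[: : integer expression expected" on unset/empty vars.
    if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)    # hypothetical extra argument
    fi
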
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:40.352 08:52:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:46.921 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:46.921 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
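At this point gather_supported_nvmf_pci_devs has built per-family arrays of vendor:device IDs (Intel e810/x722, Mellanox ConnectX), and because SPDK_TEST_NVMF_NICS=mlx5 it narrows pci_devs to the Mellanox entries, reporting the two ConnectX-4 Lx ports (0x15b3:0x1015) at 0000:da:00.0 and 0000:da:00.1. A rough equivalent query with stock pciutils, independent of the harness's pci_bus_cache:

    # List Mellanox (vendor 0x15b3) functions with full PCI domain addresses.
    lspci -D -d 15b3: | while read -r addr rest; do
        echo "candidate RDMA NIC: $addr ($rest)"
    done
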
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:15:46.921 Found net devices under 0000:da:00.0: mlx_0_0 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:46.921 Found net devices under 0000:da:00.1: mlx_0_1 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # rdma_device_init 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:46.921 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # allocate_nic_ips 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
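rdma_device_init above loads the complete RDMA kernel stack before allocate_nic_ips assigns the 192.168.100.x addresses. Outside the harness the same module set can be loaded directly; the names below are taken verbatim from the modprobe traces:

    # RDMA core, verbs, connection managers, and their userspace interfaces.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
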
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:46.922 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:46.922 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:46.922 altname enp218s0f0np0 00:15:46.922 altname ens818f0np0 00:15:46.922 inet 192.168.100.8/24 scope global mlx_0_0 00:15:46.922 valid_lft forever preferred_lft forever 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:46.922 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:46.922 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:46.922 altname enp218s0f1np1 00:15:46.922 altname ens818f1np1 00:15:46.922 inet 192.168.100.9/24 scope global mlx_0_1 00:15:46.922 valid_lft forever preferred_lft forever 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:46.922 08:52:08 
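get_ip_address, traced twice above, reduces the one-line `ip -o -4` output to a bare IPv4 address; the ip addr dumps then confirm 192.168.100.8 and 192.168.100.9 on the two mlx ports. The pipeline in isolation, reconstructed from the trace:

    # Extract "192.168.100.8" from "6: mlx_0_0 inet 192.168.100.8/24 ...".
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this node
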
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:15:46.922 192.168.100.9' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:15:46.922 192.168.100.9' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # head -n 1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:15:46.922 192.168.100.9' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # tail -n +2 00:15:46.922 08:52:08 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # head -n 1 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=422913 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 422913 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 422913 ']' 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.922 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.923 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.923 08:52:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 [2024-11-06 08:52:09.008934] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:15:46.923 [2024-11-06 08:52:09.008986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.923 [2024-11-06 08:52:09.085379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.923 [2024-11-06 08:52:09.128138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.923 [2024-11-06 08:52:09.128178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
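nvmfappstart launches the target with a 4-core mask and then blocks in waitforlisten until the RPC socket accepts commands; the DPDK/EAL and reactor notices that follow confirm startup. A simplified stand-in for that launch-and-wait sequence (the socket poll here is a sketch; the real waitforlisten does more bookkeeping):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Minimal wait: poll for the UNIX-domain RPC socket to appear.
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.2
    done
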
00:15:46.923 [2024-11-06 08:52:09.128187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.923 [2024-11-06 08:52:09.128193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.923 [2024-11-06 08:52:09.128197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.923 [2024-11-06 08:52:09.129714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.923 [2024-11-06 08:52:09.129823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.923 [2024-11-06 08:52:09.129917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.923 [2024-11-06 08:52:09.129918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 [2024-11-06 08:52:09.296064] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd6eda0/0xd73290) succeed. 00:15:46.923 [2024-11-06 08:52:09.304994] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd70430/0xdb4930) succeed. 
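The two create_ib_device notices are the target's response to nvmf_create_transport binding the RDMA transport to both mlx5 ports. rpc_cmd is the harness wrapper around scripts/rpc.py, so the equivalent direct call is:

    # -u 8192 sets the IO unit size; --num-shared-buffers sizes the shared RX pool.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192
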
00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 Malloc0 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 Malloc1 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 [2024-11-06 08:52:09.526518] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:46.923 08:52:09 
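Collected from the rpc_cmd traces above, the full provisioning sequence for the test subsystem, expressed as direct rpc.py calls (arguments copied from the trace):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MB bdev, 512 B blocks
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
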
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:15:46.923 00:15:46.923 Discovery Log Number of Records 2, Generation counter 2 00:15:46.923 =====Discovery Log Entry 0====== 00:15:46.923 trtype: rdma 00:15:46.923 adrfam: ipv4 00:15:46.923 subtype: current discovery subsystem 00:15:46.923 treq: not required 00:15:46.923 portid: 0 00:15:46.923 trsvcid: 4420 00:15:46.923 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:46.923 traddr: 192.168.100.8 00:15:46.923 eflags: explicit discovery connections, duplicate discovery information 00:15:46.923 rdma_prtype: not specified 00:15:46.923 rdma_qptype: connected 00:15:46.923 rdma_cms: rdma-cm 00:15:46.923 rdma_pkey: 0x0000 00:15:46.923 =====Discovery Log Entry 1====== 00:15:46.923 trtype: rdma 00:15:46.923 adrfam: ipv4 00:15:46.923 subtype: nvme subsystem 00:15:46.923 treq: not required 00:15:46.923 portid: 0 00:15:46.923 trsvcid: 4420 00:15:46.923 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:46.923 traddr: 192.168.100.8 00:15:46.923 eflags: none 00:15:46.923 rdma_prtype: not specified 00:15:46.923 rdma_qptype: connected 00:15:46.923 rdma_cms: rdma-cm 00:15:46.923 rdma_pkey: 0x0000 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:46.923 08:52:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:47.859 08:52:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:47.859 08:52:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:47.859 08:52:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:47.859 08:52:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
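On the host side the test first verifies the discovery log (one discovery entry plus cnode1, as printed above), then connects with the retry interval the harness selected for this NIC family (nvme connect -i 15). The two commands, with the host identity the harness generated earlier via `nvme gen-hostnqn`:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=$HOSTID \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
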
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:47.859 08:52:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:47.859 08:52:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:49.764 /dev/nvme0n2 ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
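waitforserial polls lsblk until both namespaces of cnode1 appear with the subsystem serial, and get_nvme_devs then scrapes `nvme list` for the /dev/nvme* names (/dev/nvme0n1 and /dev/nvme0n2 here). A condensed version of that wait loop:

    # Wait (up to ~15 tries) for 2 block devices carrying the test serial.
    i=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 2 ]; do
        (( i++ >= 15 )) && { echo "namespaces never appeared" >&2; exit 1; }
        sleep 2
    done
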
nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:49.764 08:52:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:50.701 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.701 
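Teardown mirrors setup: disconnect the host, confirm the serial has left lsblk, delete the subsystem over RPC, then (continuing below) kill the target and unload nvme-rdma. In outline, reconstructed from the traces:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # waitforserial_disconnect: done once no block device lists the serial.
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess 422913 in the trace
    modprobe -v -r nvme-rdma nvme-fabrics
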
08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:50.701 rmmod nvme_rdma 00:15:50.701 rmmod nvme_fabrics 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 422913 ']' 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 422913 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 422913 ']' 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 422913 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 422913 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 422913' 00:15:50.961 killing process with pid 422913 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 422913 00:15:50.961 08:52:13 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 422913 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:15:51.220 00:15:51.220 real 0m11.168s 00:15:51.220 user 0m21.344s 00:15:51.220 sys 0m4.941s 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:51.220 ************************************ 00:15:51.220 END TEST nvmf_nvme_cli 00:15:51.220 ************************************ 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.220 ************************************ 00:15:51.220 START TEST nvmf_auth_target 00:15:51.220 ************************************ 00:15:51.220 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:51.480 * Looking for test storage... 00:15:51.480 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lcov --version 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:51.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.480 --rc genhtml_branch_coverage=1 00:15:51.480 --rc genhtml_function_coverage=1 00:15:51.480 --rc genhtml_legend=1 00:15:51.480 --rc geninfo_all_blocks=1 00:15:51.480 --rc geninfo_unexecuted_blocks=1 00:15:51.480 00:15:51.480 ' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:51.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.480 --rc genhtml_branch_coverage=1 00:15:51.480 --rc genhtml_function_coverage=1 00:15:51.480 --rc genhtml_legend=1 00:15:51.480 --rc geninfo_all_blocks=1 00:15:51.480 --rc geninfo_unexecuted_blocks=1 00:15:51.480 00:15:51.480 ' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:51.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.480 --rc genhtml_branch_coverage=1 00:15:51.480 --rc genhtml_function_coverage=1 00:15:51.480 --rc genhtml_legend=1 00:15:51.480 --rc geninfo_all_blocks=1 00:15:51.480 --rc geninfo_unexecuted_blocks=1 00:15:51.480 00:15:51.480 ' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:51.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.480 --rc genhtml_branch_coverage=1 00:15:51.480 --rc genhtml_function_coverage=1 00:15:51.480 --rc genhtml_legend=1 00:15:51.480 --rc geninfo_all_blocks=1 00:15:51.480 --rc geninfo_unexecuted_blocks=1 00:15:51.480 00:15:51.480 ' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.480 08:52:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.480 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.481 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:51.481 08:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:58.066 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:58.067 08:52:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:58.067 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.067 08:52:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:58.067 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:15:58.067 Found net devices under 0000:da:00.0: mlx_0_0 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:58.067 Found net devices under 0000:da:00.1: mlx_0_1 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.067 08:52:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # rdma_device_init 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:15:58.067 08:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.067 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:58.068 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.068 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:58.068 altname enp218s0f0np0 00:15:58.068 altname ens818f0np0 00:15:58.068 inet 192.168.100.8/24 scope global mlx_0_0 00:15:58.068 valid_lft forever preferred_lft forever 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:58.068 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.068 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:58.068 altname enp218s0f1np1 00:15:58.068 altname ens818f1np1 00:15:58.068 inet 192.168.100.9/24 scope global mlx_0_1 00:15:58.068 valid_lft forever preferred_lft forever 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 
00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:58.068 08:52:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:15:58.068 192.168.100.9' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:15:58.068 192.168.100.9' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # head -n 1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:15:58.068 192.168.100.9' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # tail -n +2 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # head -n 1 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=426944 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 426944 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 426944 ']' 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
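The two addresses are then folded into RDMA_IP_LIST and split back out with head/tail, as traced above, before the target application is launched. A sketch of that split under the values from this run (variable names match the trace; this is a reconstruction, not the verbatim script):

    # RDMA_IP_LIST holds one address per line: first line is the primary
    # target IP, the remainder (if any) the secondary.
    RDMA_IP_LIST="$(printf '%s\n' 192.168.100.8 192.168.100.9)"

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    # With a usable first address, RDMA transport options are pinned and
    # the kernel initiator module is loaded before starting nvmf_tgt.
    [ -n "$NVMF_FIRST_TARGET_IP" ] || exit 1
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma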
00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=427106 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:15:58.068 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4bfeea99c5d1066e071e4ef697732f01515db7204802adb9 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.JR0 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4bfeea99c5d1066e071e4ef697732f01515db7204802adb9 0 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4bfeea99c5d1066e071e4ef697732f01515db7204802adb9 0 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4bfeea99c5d1066e071e4ef697732f01515db7204802adb9 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 
-- # python - 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.JR0 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.JR0 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.JR0 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a10ed0ba9242e21c834a70d631dba9710bc03c0d42a5fc266a23c616107d8e61 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.NxL 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a10ed0ba9242e21c834a70d631dba9710bc03c0d42a5fc266a23c616107d8e61 3 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a10ed0ba9242e21c834a70d631dba9710bc03c0d42a5fc266a23c616107d8e61 3 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a10ed0ba9242e21c834a70d631dba9710bc03c0d42a5fc266a23c616107d8e61 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.NxL 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.NxL 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.NxL 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@752 -- # digest=sha256 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=47e5cc65737e7dfaab821da57b045eec 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.kOQ 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 47e5cc65737e7dfaab821da57b045eec 1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 47e5cc65737e7dfaab821da57b045eec 1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=47e5cc65737e7dfaab821da57b045eec 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.kOQ 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.kOQ 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kOQ 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1e4006baf47605d196b5542e683f0fd91697fd03037198e8 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.RZF 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 1e4006baf47605d196b5542e683f0fd91697fd03037198e8 2 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1e4006baf47605d196b5542e683f0fd91697fd03037198e8 2 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix 
key digest 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1e4006baf47605d196b5542e683f0fd91697fd03037198e8 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.RZF 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.RZF 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.RZF 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ef22775d2a551ebbfb02d0ddc983698f0c977acacaae891d 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.MUP 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ef22775d2a551ebbfb02d0ddc983698f0c977acacaae891d 2 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ef22775d2a551ebbfb02d0ddc983698f0c977acacaae891d 2 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ef22775d2a551ebbfb02d0ddc983698f0c977acacaae891d 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.MUP 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.MUP 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.MUP 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:58.069 08:52:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:58.069 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=68d2a4781075a7a70bf00bc0d381f7be 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.PUX 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 68d2a4781075a7a70bf00bc0d381f7be 1 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 68d2a4781075a7a70bf00bc0d381f7be 1 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=68d2a4781075a7a70bf00bc0d381f7be 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.PUX 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.PUX 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.PUX 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4000ec72821bd59d6764061695a0601c10f45bbfb9dc9a74c6bdcbb232cf7e28 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # 
file=/tmp/spdk.key-sha512.G2e 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4000ec72821bd59d6764061695a0601c10f45bbfb9dc9a74c6bdcbb232cf7e28 3 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4000ec72821bd59d6764061695a0601c10f45bbfb9dc9a74c6bdcbb232cf7e28 3 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4000ec72821bd59d6764061695a0601c10f45bbfb9dc9a74c6bdcbb232cf7e28 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.G2e 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.G2e 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.G2e 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 426944 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 426944 ']' 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.070 08:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 427106 /var/tmp/host.sock 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 427106 ']' 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:58.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
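All four server keys (keys[0..3]) and their controller counterparts (ckeys[0..2]; ckeys[3] is deliberately left empty) come out of gen_dhchap_key above: len/2 random bytes are hex-encoded with xxd, wrapped into a DHHC-1 secret, and written mode 0600 to a mktemp file. Only an opaque "python -" invocation is visible in the trace, so the wrapping step in the sketch below is an assumption, reverse-engineered to match the observed output shape (DHHC-1:<digest index in hex>:<base64 of key plus CRC32>:); the surrounding shell mirrors the traced commands:

    gen_dhchap_key() {
        local digest=$1 len=$2 key file
        # "len" hex characters of key material, e.g. "gen_dhchap_key null 48"
        # reads 24 random bytes (matching the xxd calls traced above).
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # ASSUMPTION: the traced "python -" performs this wrap. It is inferred
        # from outputs like DHHC-1:00:...H4fmPg==: whose base64 payload decodes
        # to the hex key followed by a 4-byte little-endian CRC32.
        python3 -c 'import base64,sys,zlib; key=sys.argv[1].encode(); d={"null":0,"sha256":1,"sha384":2,"sha512":3}[sys.argv[2]]; crc=zlib.crc32(key).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(d, base64.b64encode(key+crc).decode()))' \
            "$key" "$digest" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    keys[0]=$(gen_dhchap_key null 48)     # -> DHHC-1:00:...: as in the trace
    ckeys[0]=$(gen_dhchap_key sha512 64)  # -> DHHC-1:03:...: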
00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.329 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.JR0 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.JR0 00:15:58.589 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.JR0 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.NxL ]] 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NxL 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NxL 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NxL 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kOQ 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.848 
08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kOQ 00:15:58.848 08:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kOQ 00:15:59.108 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.RZF ]] 00:15:59.108 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RZF 00:15:59.108 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.108 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.108 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.108 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RZF 00:15:59.108 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RZF 00:15:59.367 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:59.367 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.MUP 00:15:59.367 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.367 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.367 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.367 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.MUP 00:15:59.367 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.MUP 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.PUX ]] 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PUX 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PUX 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PUX 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:59.626 08:52:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.G2e 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.G2e 00:15:59.626 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.G2e 00:15:59.885 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:59.885 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:59.885 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.885 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.886 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:59.886 08:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:00.145 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.403 00:16:00.404 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.404 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.404 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.663 { 00:16:00.663 "cntlid": 1, 00:16:00.663 "qid": 0, 00:16:00.663 "state": "enabled", 00:16:00.663 "thread": "nvmf_tgt_poll_group_000", 00:16:00.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:00.663 "listen_address": { 00:16:00.663 "trtype": "RDMA", 00:16:00.663 "adrfam": "IPv4", 00:16:00.663 "traddr": "192.168.100.8", 00:16:00.663 "trsvcid": "4420" 00:16:00.663 }, 00:16:00.663 "peer_address": { 00:16:00.663 "trtype": "RDMA", 00:16:00.663 "adrfam": "IPv4", 00:16:00.663 "traddr": "192.168.100.8", 00:16:00.663 "trsvcid": "51951" 00:16:00.663 }, 00:16:00.663 "auth": { 00:16:00.663 "state": "completed", 00:16:00.663 "digest": "sha256", 00:16:00.663 "dhgroup": "null" 00:16:00.663 } 00:16:00.663 } 00:16:00.663 ]' 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.663 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.922 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:00.923 08:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
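Each connect_authenticate round (the target/auth.sh@65-71 frames above) follows the same pattern: pin the host to one digest/DH-group pair, authorize the host NQN on the subsystem with that round's key pair, then attach a controller so DH-HMAC-CHAP actually runs during CONNECT. A sketch of the key1 round that has just started, using the values from the log (the hostnqn/subnqn shorthands are mine):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: offer only sha256 and the "null" DH group for this round.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups null
    # Target side: authorize the host NQN with the round's key pair.
    rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach over RDMA; authentication happens during CONNECT.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
            -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
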
00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.119 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.119 08:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.119 { 00:16:05.119 "cntlid": 3, 00:16:05.119 "qid": 0, 00:16:05.119 "state": "enabled", 00:16:05.119 "thread": "nvmf_tgt_poll_group_000", 00:16:05.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:05.119 "listen_address": { 00:16:05.119 "trtype": "RDMA", 00:16:05.119 "adrfam": "IPv4", 00:16:05.119 "traddr": "192.168.100.8", 00:16:05.119 "trsvcid": "4420" 00:16:05.119 }, 00:16:05.119 "peer_address": { 00:16:05.119 "trtype": "RDMA", 00:16:05.119 "adrfam": "IPv4", 00:16:05.119 "traddr": "192.168.100.8", 00:16:05.119 "trsvcid": "56927" 00:16:05.119 }, 00:16:05.119 "auth": { 00:16:05.119 "state": "completed", 00:16:05.119 "digest": "sha256", 00:16:05.119 "dhgroup": "null" 00:16:05.119 } 00:16:05.119 } 00:16:05.119 ]' 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:05.119 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.378 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.378 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.378 08:52:28 
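After the attach, the target/auth.sh@73-77 frames assert success: bdev_nvme_get_controllers must report nvme0, and the qpair's negotiated auth object must match what the round configured. A sketch of those checks, again assuming rpc.py on PATH and the sha256/null round shown above:

    # Controller must exist on the host side.
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
    # Qpair auth parameters must match the round's digest/DH group.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
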
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.378 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:05.378 08:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:06.315 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.575 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.575 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.845 { 00:16:06.845 "cntlid": 5, 00:16:06.845 "qid": 0, 00:16:06.845 "state": "enabled", 00:16:06.845 "thread": "nvmf_tgt_poll_group_000", 00:16:06.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.845 "listen_address": { 00:16:06.845 "trtype": "RDMA", 00:16:06.845 "adrfam": "IPv4", 00:16:06.845 "traddr": "192.168.100.8", 00:16:06.845 "trsvcid": "4420" 00:16:06.845 }, 00:16:06.845 "peer_address": { 00:16:06.845 "trtype": "RDMA", 00:16:06.845 "adrfam": "IPv4", 00:16:06.845 "traddr": "192.168.100.8", 00:16:06.845 "trsvcid": "56854" 00:16:06.845 }, 00:16:06.845 "auth": { 00:16:06.845 "state": "completed", 00:16:06.845 "digest": "sha256", 00:16:06.845 "dhgroup": "null" 00:16:06.845 } 00:16:06.845 } 00:16:06.845 ]' 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.845 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.105 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:07.105 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.105 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.105 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.105 08:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.364 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:07.364 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.931 08:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.191 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.450 00:16:08.450 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.450 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.450 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.709 { 00:16:08.709 "cntlid": 7, 00:16:08.709 "qid": 0, 00:16:08.709 "state": "enabled", 00:16:08.709 "thread": "nvmf_tgt_poll_group_000", 00:16:08.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:08.709 "listen_address": { 00:16:08.709 "trtype": "RDMA", 00:16:08.709 "adrfam": "IPv4", 00:16:08.709 "traddr": "192.168.100.8", 00:16:08.709 "trsvcid": "4420" 00:16:08.709 }, 00:16:08.709 "peer_address": { 00:16:08.709 "trtype": "RDMA", 00:16:08.709 "adrfam": "IPv4", 00:16:08.709 "traddr": "192.168.100.8", 00:16:08.709 "trsvcid": "33657" 00:16:08.709 }, 00:16:08.709 "auth": { 00:16:08.709 "state": "completed", 00:16:08.709 "digest": "sha256", 00:16:08.709 "dhgroup": "null" 00:16:08.709 } 00:16:08.709 } 00:16:08.709 ]' 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.709 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.967 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:08.967 08:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:09.535 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.795 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.053 08:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.312 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.312 { 00:16:10.312 "cntlid": 9, 00:16:10.312 "qid": 0, 00:16:10.312 "state": "enabled", 00:16:10.312 "thread": "nvmf_tgt_poll_group_000", 00:16:10.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:10.312 "listen_address": { 00:16:10.312 "trtype": "RDMA", 00:16:10.312 "adrfam": "IPv4", 00:16:10.312 "traddr": "192.168.100.8", 00:16:10.312 "trsvcid": "4420" 00:16:10.312 }, 00:16:10.312 "peer_address": { 00:16:10.312 "trtype": "RDMA", 00:16:10.312 "adrfam": "IPv4", 00:16:10.312 "traddr": "192.168.100.8", 00:16:10.312 "trsvcid": "50653" 00:16:10.312 }, 00:16:10.312 "auth": { 00:16:10.312 "state": "completed", 00:16:10.312 "digest": "sha256", 00:16:10.312 "dhgroup": "ffdhe2048" 00:16:10.312 } 00:16:10.312 } 00:16:10.312 ]' 00:16:10.312 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.570 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.570 
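As the next frames show, every round ends the same way: detach the RPC-attached controller, re-drive the same keys through the kernel initiator, then deauthorize the host so the following round starts clean. Sketched with the commands from the log (hostnqn as defined earlier):

    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # ...kernel-initiator connect runs here (see the nvme-cli sketch below)...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
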
08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.570 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.570 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.570 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.570 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.570 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.829 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:10.829 08:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:11.397 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:11.657 08:52:34 
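The nvme_connect frames (target/auth.sh@36) exercise the same keys through the kernel initiator. Unlike the SPDK host, nvme-cli takes the in-band secrets directly in DHHC-1:<id>:<base64>: form rather than by keyring name. A sketch of that leg; the secrets are deliberately shortened here, the full values are the DHHC-1 strings in the log above:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q "$hostnqn" --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
         --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
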
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.657 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.916 00:16:11.916 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.916 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.916 08:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.175 { 00:16:12.175 "cntlid": 11, 00:16:12.175 "qid": 0, 00:16:12.175 "state": "enabled", 00:16:12.175 "thread": "nvmf_tgt_poll_group_000", 00:16:12.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:12.175 "listen_address": { 00:16:12.175 "trtype": "RDMA", 00:16:12.175 "adrfam": "IPv4", 00:16:12.175 "traddr": "192.168.100.8", 00:16:12.175 "trsvcid": "4420" 00:16:12.175 }, 00:16:12.175 "peer_address": { 00:16:12.175 "trtype": "RDMA", 00:16:12.175 "adrfam": "IPv4", 00:16:12.175 "traddr": "192.168.100.8", 00:16:12.175 "trsvcid": "52375" 00:16:12.175 }, 00:16:12.175 "auth": { 00:16:12.175 "state": 
"completed", 00:16:12.175 "digest": "sha256", 00:16:12.175 "dhgroup": "ffdhe2048" 00:16:12.175 } 00:16:12.175 } 00:16:12.175 ]' 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.175 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.434 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.434 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.434 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.434 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:12.434 08:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.372 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.631 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.631 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.889 { 00:16:13.889 "cntlid": 13, 00:16:13.889 "qid": 0, 00:16:13.889 "state": "enabled", 00:16:13.889 "thread": "nvmf_tgt_poll_group_000", 00:16:13.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:13.889 "listen_address": { 00:16:13.889 "trtype": "RDMA", 00:16:13.889 "adrfam": "IPv4", 00:16:13.889 "traddr": "192.168.100.8", 00:16:13.889 "trsvcid": "4420" 
00:16:13.889 }, 00:16:13.889 "peer_address": { 00:16:13.889 "trtype": "RDMA", 00:16:13.889 "adrfam": "IPv4", 00:16:13.889 "traddr": "192.168.100.8", 00:16:13.889 "trsvcid": "48552" 00:16:13.889 }, 00:16:13.889 "auth": { 00:16:13.889 "state": "completed", 00:16:13.889 "digest": "sha256", 00:16:13.889 "dhgroup": "ffdhe2048" 00:16:13.889 } 00:16:13.889 } 00:16:13.889 ]' 00:16:13.889 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.149 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.149 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.149 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.149 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.149 08:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.149 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.149 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.407 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:14.407 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:14.975 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.975 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.975 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.975 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.975 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.234 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.235 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.235 08:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.235 
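The round that begins in the frames below uses key3, the only key registered without a controller key, so the ckey expansion at target/auth.sh@68 produces nothing and the subsystem is configured for one-way authentication (note nvmf_subsystem_add_host ... --dhchap-key key3 with no --dhchap-ctrlr-key). A sketch of that conditional, with $keyid standing in for the script's positional parameter:

    # :+ expands to the flag only when a controller key exists for this key id.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
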
08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.235 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.493 00:16:15.493 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.493 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.493 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.752 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.752 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.752 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.752 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.752 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.752 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.752 { 00:16:15.752 "cntlid": 15, 00:16:15.752 "qid": 0, 00:16:15.752 "state": "enabled", 00:16:15.752 "thread": "nvmf_tgt_poll_group_000", 00:16:15.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.752 
"listen_address": { 00:16:15.752 "trtype": "RDMA", 00:16:15.752 "adrfam": "IPv4", 00:16:15.752 "traddr": "192.168.100.8", 00:16:15.752 "trsvcid": "4420" 00:16:15.752 }, 00:16:15.752 "peer_address": { 00:16:15.752 "trtype": "RDMA", 00:16:15.753 "adrfam": "IPv4", 00:16:15.753 "traddr": "192.168.100.8", 00:16:15.753 "trsvcid": "50538" 00:16:15.753 }, 00:16:15.753 "auth": { 00:16:15.753 "state": "completed", 00:16:15.753 "digest": "sha256", 00:16:15.753 "dhgroup": "ffdhe2048" 00:16:15.753 } 00:16:15.753 } 00:16:15.753 ]' 00:16:15.753 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.753 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.753 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.753 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.753 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.011 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.011 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.011 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.012 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:16.012 08:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.950 08:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.209 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:16:17.469 { 00:16:17.469 "cntlid": 17, 00:16:17.469 "qid": 0, 00:16:17.469 "state": "enabled", 00:16:17.469 "thread": "nvmf_tgt_poll_group_000", 00:16:17.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:17.469 "listen_address": { 00:16:17.469 "trtype": "RDMA", 00:16:17.469 "adrfam": "IPv4", 00:16:17.469 "traddr": "192.168.100.8", 00:16:17.469 "trsvcid": "4420" 00:16:17.469 }, 00:16:17.469 "peer_address": { 00:16:17.469 "trtype": "RDMA", 00:16:17.469 "adrfam": "IPv4", 00:16:17.469 "traddr": "192.168.100.8", 00:16:17.469 "trsvcid": "33720" 00:16:17.469 }, 00:16:17.469 "auth": { 00:16:17.469 "state": "completed", 00:16:17.469 "digest": "sha256", 00:16:17.469 "dhgroup": "ffdhe3072" 00:16:17.469 } 00:16:17.469 } 00:16:17.469 ]' 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.469 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.728 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.728 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.728 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.728 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.728 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.728 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.987 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:17.987 08:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:18.556 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.556 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.556 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.556 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.556 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.556 08:52:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.556 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.556 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.816 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.075 00:16:19.075 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.075 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.075 08:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.333 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.333 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.333 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.333 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.333 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.333 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.333 { 00:16:19.333 "cntlid": 19, 00:16:19.333 "qid": 0, 00:16:19.333 "state": "enabled", 00:16:19.333 "thread": "nvmf_tgt_poll_group_000", 00:16:19.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:19.333 "listen_address": { 00:16:19.333 "trtype": "RDMA", 00:16:19.333 "adrfam": "IPv4", 00:16:19.333 "traddr": "192.168.100.8", 00:16:19.333 "trsvcid": "4420" 00:16:19.333 }, 00:16:19.333 "peer_address": { 00:16:19.334 "trtype": "RDMA", 00:16:19.334 "adrfam": "IPv4", 00:16:19.334 "traddr": "192.168.100.8", 00:16:19.334 "trsvcid": "51862" 00:16:19.334 }, 00:16:19.334 "auth": { 00:16:19.334 "state": "completed", 00:16:19.334 "digest": "sha256", 00:16:19.334 "dhgroup": "ffdhe3072" 00:16:19.334 } 00:16:19.334 } 00:16:19.334 ]' 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.334 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.593 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:19.593 08:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.530 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.789 00:16:20.789 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.789 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.789 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.048 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- 
# [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.048 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.048 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.048 08:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.048 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.048 { 00:16:21.048 "cntlid": 21, 00:16:21.048 "qid": 0, 00:16:21.048 "state": "enabled", 00:16:21.048 "thread": "nvmf_tgt_poll_group_000", 00:16:21.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:21.048 "listen_address": { 00:16:21.048 "trtype": "RDMA", 00:16:21.048 "adrfam": "IPv4", 00:16:21.048 "traddr": "192.168.100.8", 00:16:21.048 "trsvcid": "4420" 00:16:21.048 }, 00:16:21.048 "peer_address": { 00:16:21.048 "trtype": "RDMA", 00:16:21.048 "adrfam": "IPv4", 00:16:21.048 "traddr": "192.168.100.8", 00:16:21.048 "trsvcid": "54691" 00:16:21.048 }, 00:16:21.048 "auth": { 00:16:21.048 "state": "completed", 00:16:21.048 "digest": "sha256", 00:16:21.048 "dhgroup": "ffdhe3072" 00:16:21.048 } 00:16:21.048 } 00:16:21.048 ]' 00:16:21.048 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.048 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.048 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.307 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.308 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.308 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.308 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.308 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.567 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:21.567 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:22.135 08:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.135 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.135 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.135 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.135 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.135 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.135 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:22.135 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:22.393 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:22.393 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.393 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.393 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:22.393 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.394 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.652 00:16:22.652 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.652 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.652 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.910 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.910 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.910 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.910 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.910 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.910 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.910 { 00:16:22.910 "cntlid": 23, 00:16:22.910 "qid": 0, 00:16:22.910 "state": "enabled", 00:16:22.910 "thread": "nvmf_tgt_poll_group_000", 00:16:22.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:22.910 "listen_address": { 00:16:22.910 "trtype": "RDMA", 00:16:22.910 "adrfam": "IPv4", 00:16:22.911 "traddr": "192.168.100.8", 00:16:22.911 "trsvcid": "4420" 00:16:22.911 }, 00:16:22.911 "peer_address": { 00:16:22.911 "trtype": "RDMA", 00:16:22.911 "adrfam": "IPv4", 00:16:22.911 "traddr": "192.168.100.8", 00:16:22.911 "trsvcid": "59143" 00:16:22.911 }, 00:16:22.911 "auth": { 00:16:22.911 "state": "completed", 00:16:22.911 "digest": "sha256", 00:16:22.911 "dhgroup": "ffdhe3072" 00:16:22.911 } 00:16:22.911 } 00:16:22.911 ]' 00:16:22.911 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.911 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.911 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.170 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.170 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.170 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.170 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.170 08:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.170 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:23.170 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
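The ffdhe3072 round that ends here repeats the same host/target sequence for every key index, so it condenses to a short sketch. This is a reconstruction from the trace above, not the script itself: paths, addresses, NQNs and key names are the ones this log prints, key0/ckey0 are keyring names registered earlier in the run, and the assumption that the bare rpc_cmd calls use the target's default RPC socket is mine (the trace never shows that socket).

    # One DH-HMAC-CHAP round trip as exercised above (sketch, not the script itself)
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock                  # SPDK host application socket, as traced
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

    # Pin the host to a single digest/DH-group pair for this pass
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Authorize the host on the subsystem; key0/ckey0 are keyring names registered
    # earlier in the script (target side; default RPC socket assumed here)
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Authenticate by attaching a controller from the SPDK host stack
    $rpc -s $hostsock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller came up and the qpair finished DH-HMAC-CHAP; this one
    # jq call condenses the three separate .digest/.dhgroup/.state checks above
    [[ $($rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth | .digest, .dhgroup, .state'
    # expected: sha256 / ffdhe3072 / completed, matching the qpair JSON in this log

    # Tear down before the next key or DH group
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0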
00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.107 08:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.366 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.625 00:16:24.625 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.625 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.625 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.883 { 00:16:24.883 "cntlid": 25, 00:16:24.883 "qid": 0, 00:16:24.883 "state": "enabled", 00:16:24.883 "thread": "nvmf_tgt_poll_group_000", 00:16:24.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:24.883 "listen_address": { 00:16:24.883 "trtype": "RDMA", 00:16:24.883 "adrfam": "IPv4", 00:16:24.883 "traddr": "192.168.100.8", 00:16:24.883 "trsvcid": "4420" 00:16:24.883 }, 00:16:24.883 "peer_address": { 00:16:24.883 "trtype": "RDMA", 00:16:24.883 "adrfam": "IPv4", 00:16:24.883 "traddr": "192.168.100.8", 00:16:24.883 "trsvcid": "33404" 00:16:24.883 }, 00:16:24.883 "auth": { 00:16:24.883 "state": "completed", 00:16:24.883 "digest": "sha256", 00:16:24.883 "dhgroup": "ffdhe4096" 00:16:24.883 } 00:16:24.883 } 00:16:24.883 ]' 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.883 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.141 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:25.141 08:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:25.714 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.714 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.714 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.714 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.974 08:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.233 00:16:26.233 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.233 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.233 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.493 { 00:16:26.493 "cntlid": 27, 00:16:26.493 "qid": 0, 00:16:26.493 "state": "enabled", 00:16:26.493 "thread": "nvmf_tgt_poll_group_000", 00:16:26.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.493 "listen_address": { 00:16:26.493 "trtype": "RDMA", 00:16:26.493 "adrfam": "IPv4", 00:16:26.493 "traddr": "192.168.100.8", 00:16:26.493 "trsvcid": "4420" 00:16:26.493 }, 00:16:26.493 "peer_address": { 00:16:26.493 "trtype": "RDMA", 00:16:26.493 "adrfam": "IPv4", 00:16:26.493 "traddr": "192.168.100.8", 00:16:26.493 "trsvcid": "46908" 00:16:26.493 }, 00:16:26.493 "auth": { 00:16:26.493 "state": "completed", 00:16:26.493 "digest": "sha256", 00:16:26.493 "dhgroup": "ffdhe4096" 00:16:26.493 } 00:16:26.493 } 00:16:26.493 ]' 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.493 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.751 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.751 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.751 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.751 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.751 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.009 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret 
DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:27.009 08:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.576 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.835 08:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.094 00:16:28.094 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.094 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.094 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.353 { 00:16:28.353 "cntlid": 29, 00:16:28.353 "qid": 0, 00:16:28.353 "state": "enabled", 00:16:28.353 "thread": "nvmf_tgt_poll_group_000", 00:16:28.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.353 "listen_address": { 00:16:28.353 "trtype": "RDMA", 00:16:28.353 "adrfam": "IPv4", 00:16:28.353 "traddr": "192.168.100.8", 00:16:28.353 "trsvcid": "4420" 00:16:28.353 }, 00:16:28.353 "peer_address": { 00:16:28.353 "trtype": "RDMA", 00:16:28.353 "adrfam": "IPv4", 00:16:28.353 "traddr": "192.168.100.8", 00:16:28.353 "trsvcid": "57029" 00:16:28.353 }, 00:16:28.353 "auth": { 00:16:28.353 "state": "completed", 00:16:28.353 "digest": "sha256", 00:16:28.353 "dhgroup": "ffdhe4096" 00:16:28.353 } 00:16:28.353 } 00:16:28.353 ]' 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.353 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.612 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:28.612 08:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:29.180 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.439 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.439 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.439 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.439 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.439 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.439 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.439 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.698 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.957 00:16:29.957 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.957 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.957 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.957 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.957 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.957 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.957 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.217 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.217 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.217 { 00:16:30.217 "cntlid": 31, 00:16:30.217 "qid": 0, 00:16:30.217 "state": "enabled", 00:16:30.217 "thread": "nvmf_tgt_poll_group_000", 00:16:30.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:30.217 "listen_address": { 00:16:30.217 "trtype": "RDMA", 00:16:30.217 "adrfam": "IPv4", 00:16:30.217 "traddr": "192.168.100.8", 00:16:30.217 "trsvcid": "4420" 00:16:30.217 }, 00:16:30.217 "peer_address": { 00:16:30.217 "trtype": "RDMA", 00:16:30.217 "adrfam": "IPv4", 00:16:30.217 "traddr": "192.168.100.8", 00:16:30.217 "trsvcid": "48340" 00:16:30.217 }, 00:16:30.217 "auth": { 00:16:30.217 "state": "completed", 00:16:30.217 "digest": "sha256", 00:16:30.217 "dhgroup": "ffdhe4096" 00:16:30.217 } 00:16:30.217 } 00:16:30.217 ]' 00:16:30.217 08:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.217 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.217 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.217 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.217 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.217 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.217 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
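Right after the detach below, the same key is proven against the Linux kernel initiator as well: target/auth.sh@36 passes nvme-cli the plaintext DHHC-1 secrets directly rather than keyring names. A sketch of that leg, reusing the variables from the earlier sketch; $secret stands in for the DHHC-1:03:... string printed in this trace, and key3 is the unidirectional case, so no --dhchap-ctrl-secret is given (its nvmf_subsystem_add_host call above likewise carried no --dhchap-ctrlr-key).

    # Kernel-initiator leg of the pass (flags exactly as traced in this log);
    # keys 0-2 would additionally pass --dhchap-ctrl-secret for the controller key
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "$secret"
    nvme disconnect -n "$subnqn"    # expect: "disconnected 1 controller(s)"

    # Revoke the host before the loop advances to the next key/DH group
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"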
00:16:30.217 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.476 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:30.476 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:31.043 08:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:31.043 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:31.302 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:31.302 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.302 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.302 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:31.302 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.302 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.303 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.303 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.303 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.303 08:52:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.303 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.303 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.303 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.561 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.821 { 00:16:31.821 "cntlid": 33, 00:16:31.821 "qid": 0, 00:16:31.821 "state": "enabled", 00:16:31.821 "thread": "nvmf_tgt_poll_group_000", 00:16:31.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.821 "listen_address": { 00:16:31.821 "trtype": "RDMA", 00:16:31.821 "adrfam": "IPv4", 00:16:31.821 "traddr": "192.168.100.8", 00:16:31.821 "trsvcid": "4420" 00:16:31.821 }, 00:16:31.821 "peer_address": { 00:16:31.821 "trtype": "RDMA", 00:16:31.821 "adrfam": "IPv4", 00:16:31.821 "traddr": "192.168.100.8", 00:16:31.821 "trsvcid": "58688" 00:16:31.821 }, 00:16:31.821 "auth": { 00:16:31.821 "state": "completed", 00:16:31.821 "digest": "sha256", 00:16:31.821 "dhgroup": "ffdhe6144" 00:16:31.821 } 00:16:31.821 } 00:16:31.821 ]' 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.821 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.080 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.080 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.080 
08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.080 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.080 08:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.340 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:32.340 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.908 08:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.168 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.426 00:16:33.426 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.426 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.426 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.684 { 00:16:33.684 "cntlid": 35, 00:16:33.684 "qid": 0, 00:16:33.684 "state": "enabled", 00:16:33.684 "thread": "nvmf_tgt_poll_group_000", 00:16:33.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.684 "listen_address": { 00:16:33.684 "trtype": "RDMA", 00:16:33.684 "adrfam": "IPv4", 00:16:33.684 "traddr": "192.168.100.8", 00:16:33.684 "trsvcid": "4420" 00:16:33.684 }, 00:16:33.684 "peer_address": { 00:16:33.684 "trtype": "RDMA", 00:16:33.684 "adrfam": "IPv4", 00:16:33.684 "traddr": "192.168.100.8", 00:16:33.684 "trsvcid": "41463" 00:16:33.684 }, 00:16:33.684 "auth": { 00:16:33.684 "state": "completed", 00:16:33.684 "digest": "sha256", 00:16:33.684 "dhgroup": "ffdhe6144" 00:16:33.684 } 00:16:33.684 } 00:16:33.684 ]' 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.684 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.684 
08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.942 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.942 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.942 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.942 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.942 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.201 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:34.201 08:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.769 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.028 08:52:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.028 08:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.286 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.544 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.544 { 00:16:35.544 "cntlid": 37, 00:16:35.544 "qid": 0, 00:16:35.544 "state": "enabled", 00:16:35.544 "thread": "nvmf_tgt_poll_group_000", 00:16:35.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:35.544 "listen_address": { 00:16:35.544 "trtype": "RDMA", 00:16:35.544 "adrfam": "IPv4", 00:16:35.544 "traddr": "192.168.100.8", 00:16:35.544 "trsvcid": "4420" 00:16:35.544 }, 00:16:35.544 "peer_address": { 00:16:35.544 "trtype": "RDMA", 00:16:35.544 "adrfam": "IPv4", 00:16:35.544 "traddr": "192.168.100.8", 00:16:35.544 "trsvcid": "49555" 00:16:35.544 }, 00:16:35.544 "auth": { 00:16:35.544 "state": "completed", 00:16:35.544 "digest": "sha256", 00:16:35.544 "dhgroup": "ffdhe6144" 00:16:35.544 } 00:16:35.544 } 
00:16:35.544 ]' 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.803 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.062 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:36.062 08:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.631 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.890 08:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.458 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.458 { 00:16:37.458 "cntlid": 39, 00:16:37.458 "qid": 0, 00:16:37.458 "state": "enabled", 00:16:37.458 "thread": "nvmf_tgt_poll_group_000", 00:16:37.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.458 "listen_address": { 00:16:37.458 "trtype": "RDMA", 00:16:37.458 "adrfam": "IPv4", 00:16:37.458 "traddr": "192.168.100.8", 00:16:37.458 "trsvcid": "4420" 00:16:37.458 }, 00:16:37.458 "peer_address": { 00:16:37.458 "trtype": "RDMA", 00:16:37.458 "adrfam": "IPv4", 00:16:37.458 "traddr": "192.168.100.8", 00:16:37.458 "trsvcid": "36192" 00:16:37.458 }, 
00:16:37.458 "auth": { 00:16:37.458 "state": "completed", 00:16:37.458 "digest": "sha256", 00:16:37.458 "dhgroup": "ffdhe6144" 00:16:37.458 } 00:16:37.458 } 00:16:37.458 ]' 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.458 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.718 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.718 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.718 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.718 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.718 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:37.718 08:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.655 08:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.224 00:16:39.224 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.224 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.224 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.483 { 00:16:39.483 "cntlid": 41, 00:16:39.483 "qid": 0, 00:16:39.483 "state": "enabled", 00:16:39.483 "thread": "nvmf_tgt_poll_group_000", 00:16:39.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.483 "listen_address": { 00:16:39.483 "trtype": "RDMA", 00:16:39.483 "adrfam": "IPv4", 00:16:39.483 "traddr": 
"192.168.100.8", 00:16:39.483 "trsvcid": "4420" 00:16:39.483 }, 00:16:39.483 "peer_address": { 00:16:39.483 "trtype": "RDMA", 00:16:39.483 "adrfam": "IPv4", 00:16:39.483 "traddr": "192.168.100.8", 00:16:39.483 "trsvcid": "49526" 00:16:39.483 }, 00:16:39.483 "auth": { 00:16:39.483 "state": "completed", 00:16:39.483 "digest": "sha256", 00:16:39.483 "dhgroup": "ffdhe8192" 00:16:39.483 } 00:16:39.483 } 00:16:39.483 ]' 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.483 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.484 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.484 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.484 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.484 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.484 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.743 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:39.743 08:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:40.309 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.568 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.568 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.568 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.568 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.568 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.568 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.568 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.828 08:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.396 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:16:41.396 { 00:16:41.396 "cntlid": 43, 00:16:41.396 "qid": 0, 00:16:41.396 "state": "enabled", 00:16:41.396 "thread": "nvmf_tgt_poll_group_000", 00:16:41.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.396 "listen_address": { 00:16:41.396 "trtype": "RDMA", 00:16:41.396 "adrfam": "IPv4", 00:16:41.396 "traddr": "192.168.100.8", 00:16:41.396 "trsvcid": "4420" 00:16:41.396 }, 00:16:41.396 "peer_address": { 00:16:41.396 "trtype": "RDMA", 00:16:41.396 "adrfam": "IPv4", 00:16:41.396 "traddr": "192.168.100.8", 00:16:41.396 "trsvcid": "57749" 00:16:41.396 }, 00:16:41.396 "auth": { 00:16:41.396 "state": "completed", 00:16:41.396 "digest": "sha256", 00:16:41.396 "dhgroup": "ffdhe8192" 00:16:41.396 } 00:16:41.396 } 00:16:41.396 ]' 00:16:41.396 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.655 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.655 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.655 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.655 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.655 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.655 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.655 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.913 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:41.913 08:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:16:42.480 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.481 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.481 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.481 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.481 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.740 08:53:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.740 08:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.308 00:16:43.308 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.308 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.308 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.566 { 00:16:43.566 "cntlid": 45, 00:16:43.566 "qid": 0, 00:16:43.566 "state": "enabled", 00:16:43.566 "thread": "nvmf_tgt_poll_group_000", 00:16:43.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.566 "listen_address": { 00:16:43.566 "trtype": "RDMA", 00:16:43.566 "adrfam": "IPv4", 00:16:43.566 "traddr": "192.168.100.8", 00:16:43.566 "trsvcid": "4420" 00:16:43.566 }, 00:16:43.566 "peer_address": { 00:16:43.566 "trtype": "RDMA", 00:16:43.566 "adrfam": "IPv4", 00:16:43.566 "traddr": "192.168.100.8", 00:16:43.566 "trsvcid": "40054" 00:16:43.566 }, 00:16:43.566 "auth": { 00:16:43.566 "state": "completed", 00:16:43.566 "digest": "sha256", 00:16:43.566 "dhgroup": "ffdhe8192" 00:16:43.566 } 00:16:43.566 } 00:16:43.566 ]' 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.566 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.825 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:43.825 08:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:16:44.392 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
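The trace repeats one pattern per digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP parameters with bdev_nvme_set_options, register the host NQN on the subsystem with the key under test, attach a controller, read the qpair back and check that auth.digest, auth.dhgroup, and auth.state ("completed") match, then detach, reconnect once through nvme-cli with the raw DHHC-1 secrets, and remove the host. A minimal sketch of one iteration follows, paraphrasing the auth.sh helpers traced above rather than quoting the script verbatim; $hostnqn stands in for the long uuid-based host NQN shown in the trace, key3/ckey3 are keyring names registered earlier in the test, and the socket paths are the ones used in this run:

    # one connect_authenticate pass (sketch, not the verbatim script)
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3            # no ckey3 exists, so the ${ckeys[$3]:+...}
                                     # expansion adds nothing and authentication
                                     # stays unidirectional for this key
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
        jq -r '.[0].auth.state'      # the test expects "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

For key0 through key2 the same ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion visible in the trace adds the controller key back, which is why those iterations pass both --dhchap-key and --dhchap-ctrlr-key and exercise bidirectional authentication.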
00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.652 08:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.221 00:16:45.221 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.221 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.221 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.480 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.480 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.480 08:53:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.480 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.480 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.480 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.480 { 00:16:45.480 "cntlid": 47, 00:16:45.480 "qid": 0, 00:16:45.480 "state": "enabled", 00:16:45.480 "thread": "nvmf_tgt_poll_group_000", 00:16:45.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.481 "listen_address": { 00:16:45.481 "trtype": "RDMA", 00:16:45.481 "adrfam": "IPv4", 00:16:45.481 "traddr": "192.168.100.8", 00:16:45.481 "trsvcid": "4420" 00:16:45.481 }, 00:16:45.481 "peer_address": { 00:16:45.481 "trtype": "RDMA", 00:16:45.481 "adrfam": "IPv4", 00:16:45.481 "traddr": "192.168.100.8", 00:16:45.481 "trsvcid": "45042" 00:16:45.481 }, 00:16:45.481 "auth": { 00:16:45.481 "state": "completed", 00:16:45.481 "digest": "sha256", 00:16:45.481 "dhgroup": "ffdhe8192" 00:16:45.481 } 00:16:45.481 } 00:16:45.481 ]' 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.481 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.740 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:45.740 08:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:16:46.308 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.567 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.829 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.829 00:16:47.088 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.088 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.088 08:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.088 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.088 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.088 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.088 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.088 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.088 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.088 { 00:16:47.088 "cntlid": 49, 00:16:47.088 "qid": 0, 00:16:47.088 "state": "enabled", 00:16:47.088 "thread": "nvmf_tgt_poll_group_000", 00:16:47.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.088 "listen_address": { 00:16:47.088 "trtype": "RDMA", 00:16:47.088 "adrfam": "IPv4", 00:16:47.088 "traddr": "192.168.100.8", 00:16:47.088 "trsvcid": "4420" 00:16:47.088 }, 00:16:47.088 "peer_address": { 00:16:47.088 "trtype": "RDMA", 00:16:47.088 "adrfam": "IPv4", 00:16:47.088 "traddr": "192.168.100.8", 00:16:47.088 "trsvcid": "37866" 00:16:47.088 }, 00:16:47.088 "auth": { 00:16:47.088 "state": "completed", 00:16:47.088 "digest": "sha384", 00:16:47.088 "dhgroup": "null" 00:16:47.088 } 00:16:47.088 } 00:16:47.088 ]' 00:16:47.088 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.606 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:47.606 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:16:48.175 08:53:11 
00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:47.347 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:47.606 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=:
00:16:47.606 08:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=:
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:48.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
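That completes one full pass for key0: target-side host registration, SPDK-initiator attach, qpair verification, a kernel-initiator connect with the matching DHHC-1 secrets, then teardown. The keyN blocks that follow repeat the same sequence, so it is worth seeing once in condensed form (variable names and the condensation are illustrative; the commands and values mirror the trace above):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # target: permit this host with key0, demanding ckey0 for the reverse direction
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host (SPDK initiator): attach over RDMA, presenting the same key pair
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # ...verify via nvmf_subsystem_get_qpairs (see the sketch above), then detach
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel initiator: same authentication, expressed as literal DHHC-1 secrets
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'   # full values in the log
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"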
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:48.175 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:48.434 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:48.693
00:16:48.693 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:48.693 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:48.693 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:48.951 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:48.951 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:48.951 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.951 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.951 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.951 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:48.951 {
00:16:48.951 "cntlid": 51,
00:16:48.951 "qid": 0,
00:16:48.951 "state": "enabled",
00:16:48.951 "thread": "nvmf_tgt_poll_group_000",
00:16:48.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:48.951 "listen_address": {
00:16:48.951 "trtype": "RDMA",
00:16:48.951 "adrfam": "IPv4",
00:16:48.951 "traddr": "192.168.100.8",
00:16:48.951 "trsvcid": "4420"
00:16:48.951 },
00:16:48.951 "peer_address": {
00:16:48.951 "trtype": "RDMA",
00:16:48.951 "adrfam": "IPv4",
00:16:48.951 "traddr": "192.168.100.8",
00:16:48.951 "trsvcid": "55401"
00:16:48.951 },
00:16:48.951 "auth": {
00:16:48.951 "state": "completed",
00:16:48.951 "digest": "sha384",
00:16:48.951 "dhgroup": "null"
00:16:48.951 }
00:16:48.951 }
00:16:48.951 ]'
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:48.952 08:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:49.210 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==:
00:16:49.210 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==:
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:50.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:50.146 08:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.146 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:50.404
00:16:50.404 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:50.404 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:50.404 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:50.663 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:50.663 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:50.663 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.663 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.663 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.663 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:50.663 {
00:16:50.663 "cntlid": 53,
00:16:50.663 "qid": 0,
00:16:50.664 "state": "enabled",
00:16:50.664 "thread": "nvmf_tgt_poll_group_000",
00:16:50.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:50.664 "listen_address": {
00:16:50.664 "trtype": "RDMA",
00:16:50.664 "adrfam": "IPv4",
00:16:50.664 "traddr": "192.168.100.8",
00:16:50.664 "trsvcid": "4420"
00:16:50.664 },
00:16:50.664 "peer_address": {
00:16:50.664 "trtype": "RDMA",
00:16:50.664 "adrfam": "IPv4",
00:16:50.664 "traddr": "192.168.100.8",
00:16:50.664 "trsvcid": "60052"
00:16:50.664 },
00:16:50.664 "auth": {
00:16:50.664 "state": "completed",
00:16:50.664 "digest": "sha384",
00:16:50.664 "dhgroup": "null"
00:16:50.664 }
00:16:50.664 }
00:16:50.664 ]'
00:16:50.664 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:50.664 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:50.664 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:50.922 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:50.922 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:50.922 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:50.922 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:50.922 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:51.181 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp:
00:16:51.181 08:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp:
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:51.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:51.749 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:52.008 08:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:52.267
00:16:52.267 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:52.267 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:52.267 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:52.526 {
00:16:52.526 "cntlid": 55,
00:16:52.526 "qid": 0,
00:16:52.526 "state": "enabled",
00:16:52.526 "thread": "nvmf_tgt_poll_group_000",
00:16:52.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:52.526 "listen_address": {
00:16:52.526 "trtype": "RDMA",
00:16:52.526 "adrfam": "IPv4",
00:16:52.526 "traddr": "192.168.100.8",
00:16:52.526 "trsvcid": "4420"
00:16:52.526 },
00:16:52.526 "peer_address": {
00:16:52.526 "trtype": "RDMA",
00:16:52.526 "adrfam": "IPv4",
00:16:52.526 "traddr": "192.168.100.8",
00:16:52.526 "trsvcid": "39217"
00:16:52.526 },
00:16:52.526 "auth": {
00:16:52.526 "state": "completed",
00:16:52.526 "digest": "sha384",
00:16:52.526 "dhgroup": "null"
00:16:52.526 }
00:16:52.526 }
00:16:52.526 ]'
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:52.526 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:52.785 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=:
00:16:52.785 08:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=:
00:16:53.351 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:53.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
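With all four keys exercised against the null DH group, the trace now advances to ffdhe2048. The driver behind these repeated blocks is the digest x dhgroup x key matrix visible at auth.sh@118-123 in the xtrace lines; reconstructed from those trace lines (array contents inferred from this run, so treat this as a paraphrase rather than auth.sh's verbatim source):

    # Loop structure as traced at target/auth.sh@118-123.
    for digest in "${digests[@]}"; do          # this run: sha384
        for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ...
            for keyid in "${!keys[@]}"; do     # key0..key3
                # restrict the host to exactly one digest/dhgroup combination
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done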
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:53.609 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.867 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.868 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.868 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:53.868
00:16:54.126 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:54.126 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:54.126 08:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:54.126 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:54.126 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:54.126 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.126 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.126 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.126 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:54.126 {
00:16:54.126 "cntlid": 57,
00:16:54.126 "qid": 0,
00:16:54.126 "state": "enabled",
00:16:54.126 "thread": "nvmf_tgt_poll_group_000",
00:16:54.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:54.126 "listen_address": {
00:16:54.126 "trtype": "RDMA",
00:16:54.126 "adrfam": "IPv4",
00:16:54.126 "traddr": "192.168.100.8",
00:16:54.126 "trsvcid": "4420"
00:16:54.126 },
00:16:54.126 "peer_address": {
00:16:54.126 "trtype": "RDMA",
00:16:54.126 "adrfam": "IPv4",
00:16:54.126 "traddr": "192.168.100.8",
00:16:54.126 "trsvcid": "48783"
00:16:54.126 },
00:16:54.126 "auth": {
00:16:54.126 "state": "completed",
00:16:54.126 "digest": "sha384",
00:16:54.126 "dhgroup": "ffdhe2048"
00:16:54.126 }
00:16:54.126 }
00:16:54.126 ]'
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:54.384 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:54.643 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=:
00:16:54.643 08:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=:
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:55.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:55.211 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:55.469 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:55.728
00:16:55.728 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:55.728 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:55.728 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:55.986 {
00:16:55.986 "cntlid": 59,
00:16:55.986 "qid": 0,
00:16:55.986 "state": "enabled",
00:16:55.986 "thread": "nvmf_tgt_poll_group_000",
00:16:55.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:55.986 "listen_address": {
00:16:55.986 "trtype": "RDMA",
00:16:55.986 "adrfam": "IPv4",
00:16:55.986 "traddr": "192.168.100.8",
00:16:55.986 "trsvcid": "4420"
00:16:55.986 },
00:16:55.986 "peer_address": {
00:16:55.986 "trtype": "RDMA",
00:16:55.986 "adrfam": "IPv4",
00:16:55.986 "traddr": "192.168.100.8",
00:16:55.986 "trsvcid": "46425"
00:16:55.986 },
00:16:55.986 "auth": {
00:16:55.986 "state": "completed",
00:16:55.986 "digest": "sha384",
00:16:55.986 "dhgroup": "ffdhe2048"
00:16:55.986 }
00:16:55.986 }
00:16:55.986 ]'
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:55.986 08:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:56.245 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==:
00:16:56.245 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==:
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:57.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:57.181 08:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:57.181 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:16:57.181 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:57.182 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:57.440
00:16:57.440 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:57.440 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:57.440 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:57.700 {
00:16:57.700 "cntlid": 61,
00:16:57.700 "qid": 0,
00:16:57.700 "state": "enabled",
00:16:57.700 "thread": "nvmf_tgt_poll_group_000",
00:16:57.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:57.700 "listen_address": {
00:16:57.700 "trtype": "RDMA",
00:16:57.700 "adrfam": "IPv4",
00:16:57.700 "traddr": "192.168.100.8",
00:16:57.700 "trsvcid": "4420"
00:16:57.700 },
00:16:57.700 "peer_address": {
00:16:57.700 "trtype": "RDMA",
00:16:57.700 "adrfam": "IPv4",
00:16:57.700 "traddr": "192.168.100.8",
00:16:57.700 "trsvcid": "57819"
00:16:57.700 },
00:16:57.700 "auth": {
00:16:57.700 "state": "completed",
00:16:57.700 "digest": "sha384",
00:16:57.700 "dhgroup": "ffdhe2048"
00:16:57.700 }
00:16:57.700 }
00:16:57.700 ]'
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:57.700 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:57.959 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:57.959 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:57.959 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:57.959 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp:
00:16:57.959 08:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp:
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:58.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:58.897 08:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:59.155
00:16:59.155 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:59.155 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:59.155 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:59.414 {
00:16:59.414 "cntlid": 63,
00:16:59.414 "qid": 0,
00:16:59.414 "state": "enabled",
00:16:59.414 "thread": "nvmf_tgt_poll_group_000",
00:16:59.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:59.414 "listen_address": {
00:16:59.414 "trtype": "RDMA",
00:16:59.414 "adrfam": "IPv4",
00:16:59.414 "traddr": "192.168.100.8",
00:16:59.414 "trsvcid": "4420"
00:16:59.414 },
00:16:59.414 "peer_address": {
00:16:59.414 "trtype": "RDMA",
00:16:59.414 "adrfam": "IPv4",
00:16:59.414 "traddr": "192.168.100.8",
00:16:59.414 "trsvcid": "43068"
00:16:59.414 },
00:16:59.414 "auth": {
00:16:59.414 "state": "completed",
00:16:59.414 "digest": "sha384",
00:16:59.414 "dhgroup": "ffdhe2048"
00:16:59.414 }
00:16:59.414 }
00:16:59.414 ]'
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:59.414 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:59.673 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:59.673 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:59.673 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:59.673 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:59.673 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:59.932 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=:
00:16:59.932 08:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=:
00:17:00.570 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:00.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:00.570 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:00.570 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.570 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.570 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
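The key3 pass that just finished is the one-way case: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion at auth.sh@68 drops every --dhchap-ctrlr-key / --dhchap-ctrl-secret flag and only the host is authenticated to the target. Schematically, under the same assumptions as the earlier sketches (illustrative variables, values mirrored from the trace):

    # key3: no controller key, so only host->target authentication is configured
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" -l 0 --dhchap-secret 'DHHC-1:03:...'   # no --dhchap-ctrl-secret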
00:17:00.570 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:00.570 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:00.571 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:00.571 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:00.859 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:01.229
00:17:01.229 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:01.229 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:01.229 08:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:01.229 {
00:17:01.229 "cntlid": 65,
00:17:01.229 "qid": 0,
00:17:01.229 "state": "enabled",
00:17:01.229 "thread": "nvmf_tgt_poll_group_000",
00:17:01.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:01.229 "listen_address": {
00:17:01.229 "trtype": "RDMA",
00:17:01.229 "adrfam": "IPv4",
00:17:01.229 "traddr": "192.168.100.8",
00:17:01.229 "trsvcid": "4420"
00:17:01.229 },
00:17:01.229 "peer_address": {
00:17:01.229 "trtype": "RDMA",
00:17:01.229 "adrfam": "IPv4",
00:17:01.229 "traddr": "192.168.100.8",
00:17:01.229 "trsvcid": "56473"
00:17:01.229 },
00:17:01.229 "auth": {
00:17:01.229 "state": "completed",
00:17:01.229 "digest": "sha384",
00:17:01.229 "dhgroup": "ffdhe3072"
00:17:01.229 }
00:17:01.229 }
00:17:01.229 ]'
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:01.229 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:01.537 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:01.537 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:01.537 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:01.537 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=:
00:17:01.537 08:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=:
00:17:02.146 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:02.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:02.405 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:02.663
00:17:02.663 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:02.663 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:02.663 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:02.923 {
00:17:02.923 "cntlid": 67,
00:17:02.923 "qid": 0,
00:17:02.923 "state": "enabled",
00:17:02.923 "thread": "nvmf_tgt_poll_group_000",
00:17:02.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:02.923 "listen_address": {
00:17:02.923 "trtype": "RDMA",
00:17:02.923 "adrfam": "IPv4",
00:17:02.923 "traddr": "192.168.100.8",
00:17:02.923 "trsvcid": "4420"
00:17:02.923 },
00:17:02.923 "peer_address": {
00:17:02.923 "trtype": "RDMA",
00:17:02.923 "adrfam": "IPv4",
00:17:02.923 "traddr": "192.168.100.8",
00:17:02.923 "trsvcid": "36700"
00:17:02.923 },
00:17:02.923 "auth": {
00:17:02.923 "state": "completed",
00:17:02.923 "digest": "sha384",
00:17:02.923 "dhgroup": "ffdhe3072"
00:17:02.923 }
00:17:02.923 }
00:17:02.923 ]'
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:02.923 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:03.182 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:03.182 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:03.182 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:03.182 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:03.182 08:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:03.441 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==:
00:17:03.441 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==:
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:04.010 08:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.269 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.528 00:17:04.528 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.528 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.528 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.787 { 00:17:04.787 "cntlid": 69, 00:17:04.787 "qid": 0, 00:17:04.787 "state": "enabled", 00:17:04.787 "thread": "nvmf_tgt_poll_group_000", 
00:17:04.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.787 "listen_address": { 00:17:04.787 "trtype": "RDMA", 00:17:04.787 "adrfam": "IPv4", 00:17:04.787 "traddr": "192.168.100.8", 00:17:04.787 "trsvcid": "4420" 00:17:04.787 }, 00:17:04.787 "peer_address": { 00:17:04.787 "trtype": "RDMA", 00:17:04.787 "adrfam": "IPv4", 00:17:04.787 "traddr": "192.168.100.8", 00:17:04.787 "trsvcid": "48661" 00:17:04.787 }, 00:17:04.787 "auth": { 00:17:04.787 "state": "completed", 00:17:04.787 "digest": "sha384", 00:17:04.787 "dhgroup": "ffdhe3072" 00:17:04.787 } 00:17:04.787 } 00:17:04.787 ]' 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.787 08:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.046 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:05.046 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:05.613 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.870 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.870 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.870 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.870 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.871 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.871 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 
00:17:05.871 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.129 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.130 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.130 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.130 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.130 08:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.389 00:17:06.389 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.389 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.389 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:17:06.655 { 00:17:06.655 "cntlid": 71, 00:17:06.655 "qid": 0, 00:17:06.655 "state": "enabled", 00:17:06.655 "thread": "nvmf_tgt_poll_group_000", 00:17:06.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.655 "listen_address": { 00:17:06.655 "trtype": "RDMA", 00:17:06.655 "adrfam": "IPv4", 00:17:06.655 "traddr": "192.168.100.8", 00:17:06.655 "trsvcid": "4420" 00:17:06.655 }, 00:17:06.655 "peer_address": { 00:17:06.655 "trtype": "RDMA", 00:17:06.655 "adrfam": "IPv4", 00:17:06.655 "traddr": "192.168.100.8", 00:17:06.655 "trsvcid": "56059" 00:17:06.655 }, 00:17:06.655 "auth": { 00:17:06.655 "state": "completed", 00:17:06.655 "digest": "sha384", 00:17:06.655 "dhgroup": "ffdhe3072" 00:17:06.655 } 00:17:06.655 } 00:17:06.655 ]' 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.655 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.914 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:06.914 08:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:07.481 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.740 08:53:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.740 08:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.001 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.260 { 00:17:08.260 "cntlid": 73, 00:17:08.260 "qid": 0, 00:17:08.260 "state": "enabled", 00:17:08.260 "thread": "nvmf_tgt_poll_group_000", 00:17:08.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:08.260 "listen_address": { 00:17:08.260 "trtype": "RDMA", 00:17:08.260 "adrfam": "IPv4", 00:17:08.260 "traddr": "192.168.100.8", 00:17:08.260 "trsvcid": "4420" 00:17:08.260 }, 00:17:08.260 "peer_address": { 00:17:08.260 "trtype": "RDMA", 00:17:08.260 "adrfam": "IPv4", 00:17:08.260 "traddr": "192.168.100.8", 00:17:08.260 "trsvcid": "37089" 00:17:08.260 }, 00:17:08.260 "auth": { 00:17:08.260 "state": "completed", 00:17:08.260 "digest": "sha384", 00:17:08.260 "dhgroup": "ffdhe4096" 00:17:08.260 } 00:17:08.260 } 00:17:08.260 ]' 00:17:08.260 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.519 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.519 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.519 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.519 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.519 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.519 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.519 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.778 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:08.778 08:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:09.345 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.345 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:09.345 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.345 08:53:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.345 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.345 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.345 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.345 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.605 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.863 00:17:09.863 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.863 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.863 08:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
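[Editor's note] The nvme_connect/disconnect steps interleaved above exercise the same handshake a second time through the kernel initiator via nvme-cli. The shape of that call, with the long DHHC-1 secrets deliberately elided here rather than reproduced:

  # Kernel-initiator leg of each pass, as in the nvme_connect steps of the
  # trace; '...' marks elided secret material, not literal values.
  nvme connect -t rdma -a 192.168.100.8 -l 0 -i 1 \
      -n nqn.2024-03.io.spdk:cnode0 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # After disconnecting, the trace removes the host from the subsystem with
  # nvmf_subsystem_remove_host before rotating to the next key.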
00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.122 { 00:17:10.122 "cntlid": 75, 00:17:10.122 "qid": 0, 00:17:10.122 "state": "enabled", 00:17:10.122 "thread": "nvmf_tgt_poll_group_000", 00:17:10.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.122 "listen_address": { 00:17:10.122 "trtype": "RDMA", 00:17:10.122 "adrfam": "IPv4", 00:17:10.122 "traddr": "192.168.100.8", 00:17:10.122 "trsvcid": "4420" 00:17:10.122 }, 00:17:10.122 "peer_address": { 00:17:10.122 "trtype": "RDMA", 00:17:10.122 "adrfam": "IPv4", 00:17:10.122 "traddr": "192.168.100.8", 00:17:10.122 "trsvcid": "57605" 00:17:10.122 }, 00:17:10.122 "auth": { 00:17:10.122 "state": "completed", 00:17:10.122 "digest": "sha384", 00:17:10.122 "dhgroup": "ffdhe4096" 00:17:10.122 } 00:17:10.122 } 00:17:10.122 ]' 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.122 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.381 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.381 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.381 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.381 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:10.381 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:11.348 08:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.348 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.607 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.866 08:53:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.866 { 00:17:11.866 "cntlid": 77, 00:17:11.866 "qid": 0, 00:17:11.866 "state": "enabled", 00:17:11.866 "thread": "nvmf_tgt_poll_group_000", 00:17:11.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.866 "listen_address": { 00:17:11.866 "trtype": "RDMA", 00:17:11.866 "adrfam": "IPv4", 00:17:11.866 "traddr": "192.168.100.8", 00:17:11.866 "trsvcid": "4420" 00:17:11.866 }, 00:17:11.866 "peer_address": { 00:17:11.866 "trtype": "RDMA", 00:17:11.866 "adrfam": "IPv4", 00:17:11.866 "traddr": "192.168.100.8", 00:17:11.866 "trsvcid": "55349" 00:17:11.866 }, 00:17:11.866 "auth": { 00:17:11.866 "state": "completed", 00:17:11.866 "digest": "sha384", 00:17:11.866 "dhgroup": "ffdhe4096" 00:17:11.866 } 00:17:11.866 } 00:17:11.866 ]' 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.866 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.126 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.126 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.126 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.126 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.126 08:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.385 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:12.385 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:12.954 08:53:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.954 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.954 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.954 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.954 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.954 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.954 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.954 08:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.213 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:13.213 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.213 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.213 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.214 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.472 00:17:13.472 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:13.472 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.472 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.732 { 00:17:13.732 "cntlid": 79, 00:17:13.732 "qid": 0, 00:17:13.732 "state": "enabled", 00:17:13.732 "thread": "nvmf_tgt_poll_group_000", 00:17:13.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:13.732 "listen_address": { 00:17:13.732 "trtype": "RDMA", 00:17:13.732 "adrfam": "IPv4", 00:17:13.732 "traddr": "192.168.100.8", 00:17:13.732 "trsvcid": "4420" 00:17:13.732 }, 00:17:13.732 "peer_address": { 00:17:13.732 "trtype": "RDMA", 00:17:13.732 "adrfam": "IPv4", 00:17:13.732 "traddr": "192.168.100.8", 00:17:13.732 "trsvcid": "56196" 00:17:13.732 }, 00:17:13.732 "auth": { 00:17:13.732 "state": "completed", 00:17:13.732 "digest": "sha384", 00:17:13.732 "dhgroup": "ffdhe4096" 00:17:13.732 } 00:17:13.732 } 00:17:13.732 ]' 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.732 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.991 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:13.991 08:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:14.558 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.817 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.189 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:15.189 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.189 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.189 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:15.189 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.189 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.190 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.190 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.190 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.190 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.190 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.190 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.190 08:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.471 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.471 { 00:17:15.471 "cntlid": 81, 00:17:15.471 "qid": 0, 00:17:15.471 "state": "enabled", 00:17:15.471 "thread": "nvmf_tgt_poll_group_000", 00:17:15.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.471 "listen_address": { 00:17:15.471 "trtype": "RDMA", 00:17:15.471 "adrfam": "IPv4", 00:17:15.471 "traddr": "192.168.100.8", 00:17:15.471 "trsvcid": "4420" 00:17:15.471 }, 00:17:15.471 "peer_address": { 00:17:15.471 "trtype": "RDMA", 00:17:15.471 "adrfam": "IPv4", 00:17:15.471 "traddr": "192.168.100.8", 00:17:15.471 "trsvcid": "49532" 00:17:15.471 }, 00:17:15.471 "auth": { 00:17:15.471 "state": "completed", 00:17:15.471 "digest": "sha384", 00:17:15.471 "dhgroup": "ffdhe6144" 00:17:15.471 } 00:17:15.471 } 00:17:15.471 ]' 00:17:15.471 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.730 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.730 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.730 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.730 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.730 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.730 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.730 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.989 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret 
DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:15.989 08:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.557 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.816 08:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.384 00:17:17.384 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.384 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.384 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.384 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.385 { 00:17:17.385 "cntlid": 83, 00:17:17.385 "qid": 0, 00:17:17.385 "state": "enabled", 00:17:17.385 "thread": "nvmf_tgt_poll_group_000", 00:17:17.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.385 "listen_address": { 00:17:17.385 "trtype": "RDMA", 00:17:17.385 "adrfam": "IPv4", 00:17:17.385 "traddr": "192.168.100.8", 00:17:17.385 "trsvcid": "4420" 00:17:17.385 }, 00:17:17.385 "peer_address": { 00:17:17.385 "trtype": "RDMA", 00:17:17.385 "adrfam": "IPv4", 00:17:17.385 "traddr": "192.168.100.8", 00:17:17.385 "trsvcid": "36531" 00:17:17.385 }, 00:17:17.385 "auth": { 00:17:17.385 "state": "completed", 00:17:17.385 "digest": "sha384", 00:17:17.385 "dhgroup": "ffdhe6144" 00:17:17.385 } 00:17:17.385 } 00:17:17.385 ]' 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.385 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.644 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.644 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.644 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.644 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.644 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.903 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:17.903 08:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.472 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
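The trace above is one full iteration of the test's connect_authenticate cycle for sha384/ffdhe6144 with key index 2: the host-side bdev_nvme options are reset to the digest and dhgroup under test, the target re-admits the host NQN with the matching key pair, a controller is attached over RDMA, and the new qpair's auth block is checked with jq before detaching. A minimal standalone sketch of that cycle, assuming the target listens on its default rpc socket, the host rpc server is on /var/tmp/host.sock, and the named keys (key2/ckey2) were registered earlier in the script:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

# host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# target side (default rpc socket assumed): admit the host NQN with this key pair
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# attach over RDMA, then confirm the qpair finished authentication
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.state'        # expect "completed"

hostrpc bdev_nvme_detach_controller nvme0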
00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.731 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.989 00:17:18.989 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.989 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.989 08:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.248 { 00:17:19.248 "cntlid": 85, 00:17:19.248 "qid": 0, 00:17:19.248 "state": "enabled", 00:17:19.248 "thread": "nvmf_tgt_poll_group_000", 00:17:19.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.248 "listen_address": { 00:17:19.248 "trtype": "RDMA", 00:17:19.248 "adrfam": "IPv4", 00:17:19.248 "traddr": "192.168.100.8", 00:17:19.248 "trsvcid": "4420" 00:17:19.248 }, 00:17:19.248 "peer_address": { 00:17:19.248 "trtype": "RDMA", 00:17:19.248 "adrfam": "IPv4", 00:17:19.248 "traddr": "192.168.100.8", 00:17:19.248 "trsvcid": "33691" 00:17:19.248 }, 00:17:19.248 "auth": { 00:17:19.248 "state": "completed", 00:17:19.248 "digest": "sha384", 00:17:19.248 "dhgroup": "ffdhe6144" 00:17:19.248 } 00:17:19.248 } 00:17:19.248 ]' 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.248 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.507 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:19.507 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.507 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.767 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:19.767 08:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.335 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.595 
08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.595 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.854 00:17:20.854 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.854 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.854 08:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.113 { 00:17:21.113 "cntlid": 87, 00:17:21.113 "qid": 0, 00:17:21.113 "state": "enabled", 00:17:21.113 "thread": "nvmf_tgt_poll_group_000", 00:17:21.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.113 "listen_address": { 00:17:21.113 "trtype": "RDMA", 00:17:21.113 "adrfam": "IPv4", 00:17:21.113 "traddr": "192.168.100.8", 00:17:21.113 "trsvcid": "4420" 00:17:21.113 }, 00:17:21.113 "peer_address": { 00:17:21.113 "trtype": "RDMA", 00:17:21.113 "adrfam": "IPv4", 00:17:21.113 "traddr": "192.168.100.8", 00:17:21.113 "trsvcid": "40319" 00:17:21.113 }, 00:17:21.113 "auth": { 00:17:21.113 "state": "completed", 00:17:21.113 "digest": "sha384", 00:17:21.113 "dhgroup": "ffdhe6144" 00:17:21.113 } 00:17:21.113 } 00:17:21.113 ]' 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.113 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.372 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.372 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.372 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.372 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.372 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.372 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:21.372 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:22.309 08:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.309 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.310 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.877 00:17:22.877 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.877 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.877 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.137 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.137 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.137 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.137 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.137 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.137 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.137 { 00:17:23.137 "cntlid": 89, 00:17:23.137 "qid": 0, 00:17:23.137 "state": "enabled", 00:17:23.137 "thread": "nvmf_tgt_poll_group_000", 00:17:23.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.137 "listen_address": { 00:17:23.137 "trtype": "RDMA", 00:17:23.137 "adrfam": "IPv4", 00:17:23.137 "traddr": "192.168.100.8", 00:17:23.137 "trsvcid": "4420" 00:17:23.137 }, 00:17:23.137 "peer_address": { 00:17:23.137 "trtype": "RDMA", 00:17:23.137 "adrfam": "IPv4", 00:17:23.137 "traddr": "192.168.100.8", 00:17:23.137 "trsvcid": "42834" 00:17:23.137 }, 00:17:23.137 "auth": { 00:17:23.137 "state": "completed", 00:17:23.137 "digest": "sha384", 00:17:23.137 "dhgroup": "ffdhe8192" 00:17:23.137 } 00:17:23.137 } 00:17:23.137 ]' 00:17:23.137 08:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.137 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.137 
08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.137 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.137 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.137 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.137 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.137 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.396 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:23.396 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:23.963 08:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.222 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.222 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.222 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.222 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.222 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.222 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.222 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.482 08:53:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.482 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.741 00:17:24.741 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.741 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.741 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.000 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.000 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.000 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.000 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.000 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.000 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.000 { 00:17:25.000 "cntlid": 91, 00:17:25.000 "qid": 0, 00:17:25.000 "state": "enabled", 00:17:25.000 "thread": "nvmf_tgt_poll_group_000", 00:17:25.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:25.000 "listen_address": { 00:17:25.000 "trtype": "RDMA", 00:17:25.000 "adrfam": "IPv4", 00:17:25.000 "traddr": "192.168.100.8", 00:17:25.000 "trsvcid": "4420" 00:17:25.000 }, 00:17:25.000 "peer_address": { 00:17:25.001 "trtype": "RDMA", 00:17:25.001 "adrfam": "IPv4", 00:17:25.001 "traddr": "192.168.100.8", 00:17:25.001 "trsvcid": "37917" 00:17:25.001 }, 00:17:25.001 "auth": { 00:17:25.001 "state": 
"completed", 00:17:25.001 "digest": "sha384", 00:17:25.001 "dhgroup": "ffdhe8192" 00:17:25.001 } 00:17:25.001 } 00:17:25.001 ]' 00:17:25.001 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.001 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.001 08:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.259 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.259 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.259 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.259 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.259 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.519 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:25.519 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:26.087 08:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.087 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.087 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.087 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.087 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.087 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.087 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.087 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.346 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.913 00:17:26.913 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.913 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.913 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.172 { 00:17:27.172 "cntlid": 93, 00:17:27.172 "qid": 0, 00:17:27.172 "state": "enabled", 00:17:27.172 "thread": "nvmf_tgt_poll_group_000", 00:17:27.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:27.172 "listen_address": { 00:17:27.172 "trtype": "RDMA", 00:17:27.172 "adrfam": "IPv4", 00:17:27.172 "traddr": "192.168.100.8", 00:17:27.172 "trsvcid": "4420" 
00:17:27.172 }, 00:17:27.172 "peer_address": { 00:17:27.172 "trtype": "RDMA", 00:17:27.172 "adrfam": "IPv4", 00:17:27.172 "traddr": "192.168.100.8", 00:17:27.172 "trsvcid": "34086" 00:17:27.172 }, 00:17:27.172 "auth": { 00:17:27.172 "state": "completed", 00:17:27.172 "digest": "sha384", 00:17:27.172 "dhgroup": "ffdhe8192" 00:17:27.172 } 00:17:27.172 } 00:17:27.172 ]' 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.172 08:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.172 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.172 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.172 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.172 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.172 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.430 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:27.430 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:27.997 08:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.256 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:28.256 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.256 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.256 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.256 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.256 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.256 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.256 
08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.257 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.825 00:17:28.825 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.825 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.825 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.084 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.084 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.084 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.084 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.084 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.084 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.084 { 00:17:29.084 "cntlid": 95, 00:17:29.084 "qid": 0, 00:17:29.084 "state": "enabled", 00:17:29.084 "thread": "nvmf_tgt_poll_group_000", 00:17:29.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:29.084 
"listen_address": { 00:17:29.084 "trtype": "RDMA", 00:17:29.084 "adrfam": "IPv4", 00:17:29.084 "traddr": "192.168.100.8", 00:17:29.084 "trsvcid": "4420" 00:17:29.084 }, 00:17:29.084 "peer_address": { 00:17:29.084 "trtype": "RDMA", 00:17:29.084 "adrfam": "IPv4", 00:17:29.084 "traddr": "192.168.100.8", 00:17:29.084 "trsvcid": "46909" 00:17:29.084 }, 00:17:29.084 "auth": { 00:17:29.084 "state": "completed", 00:17:29.084 "digest": "sha384", 00:17:29.084 "dhgroup": "ffdhe8192" 00:17:29.084 } 00:17:29.084 } 00:17:29.084 ]' 00:17:29.084 08:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.084 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.084 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.084 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.084 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.084 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.084 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.084 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.343 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:29.343 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:29.911 08:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:17:30.170 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.428 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:30.428 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.429 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.687 00:17:30.688 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.688 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.688 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
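Here the outer loops have advanced to the sha512 digest paired with the null dhgroup, i.e. DH-HMAC-CHAP without an ephemeral DH exchange, and the same per-key cycle repeats. Between SPDK-host iterations each key pair is also exercised through the kernel initiator via nvme-cli, passing the DHHC-1 secrets directly on the command line; a sketch of that leg (secrets shortened here, the full key0/ckey0 values appear in the trace above), assuming a kernel nvme stack with RDMA and in-band auth support:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

# kernel host: connect with plaintext DHHC-1 secrets; -l 0 sets ctrl-loss-tmo to 0
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:NGJm...H4fmPg==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:YTEw...OGU2MTHblKY=:'

nvme disconnect -n "$subnqn"    # expect: disconnected 1 controller(s)

# target: revoke the host again before the next digest/dhgroup/key combination
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"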
00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.947 { 00:17:30.947 "cntlid": 97, 00:17:30.947 "qid": 0, 00:17:30.947 "state": "enabled", 00:17:30.947 "thread": "nvmf_tgt_poll_group_000", 00:17:30.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:30.947 "listen_address": { 00:17:30.947 "trtype": "RDMA", 00:17:30.947 "adrfam": "IPv4", 00:17:30.947 "traddr": "192.168.100.8", 00:17:30.947 "trsvcid": "4420" 00:17:30.947 }, 00:17:30.947 "peer_address": { 00:17:30.947 "trtype": "RDMA", 00:17:30.947 "adrfam": "IPv4", 00:17:30.947 "traddr": "192.168.100.8", 00:17:30.947 "trsvcid": "55213" 00:17:30.947 }, 00:17:30.947 "auth": { 00:17:30.947 "state": "completed", 00:17:30.947 "digest": "sha512", 00:17:30.947 "dhgroup": "null" 00:17:30.947 } 00:17:30.947 } 00:17:30.947 ]' 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.947 08:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.206 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:31.206 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:31.774 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.033 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.033 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.033 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.033 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.033 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.033 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.033 08:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.033 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.292 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.551 08:53:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.551 { 00:17:32.551 "cntlid": 99, 00:17:32.551 "qid": 0, 00:17:32.551 "state": "enabled", 00:17:32.551 "thread": "nvmf_tgt_poll_group_000", 00:17:32.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:32.551 "listen_address": { 00:17:32.551 "trtype": "RDMA", 00:17:32.551 "adrfam": "IPv4", 00:17:32.551 "traddr": "192.168.100.8", 00:17:32.551 "trsvcid": "4420" 00:17:32.551 }, 00:17:32.551 "peer_address": { 00:17:32.551 "trtype": "RDMA", 00:17:32.551 "adrfam": "IPv4", 00:17:32.551 "traddr": "192.168.100.8", 00:17:32.551 "trsvcid": "55914" 00:17:32.551 }, 00:17:32.551 "auth": { 00:17:32.551 "state": "completed", 00:17:32.551 "digest": "sha512", 00:17:32.551 "dhgroup": "null" 00:17:32.551 } 00:17:32.551 } 00:17:32.551 ]' 00:17:32.551 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.810 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.810 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.810 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.810 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.810 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.810 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.810 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.069 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:33.069 08:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.636 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.896 08:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.154 00:17:34.154 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.154 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.154 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.413 { 00:17:34.413 "cntlid": 101, 00:17:34.413 "qid": 0, 00:17:34.413 "state": "enabled", 00:17:34.413 "thread": "nvmf_tgt_poll_group_000", 00:17:34.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:34.413 "listen_address": { 00:17:34.413 "trtype": "RDMA", 00:17:34.413 "adrfam": "IPv4", 00:17:34.413 "traddr": "192.168.100.8", 00:17:34.413 "trsvcid": "4420" 00:17:34.413 }, 00:17:34.413 "peer_address": { 00:17:34.413 "trtype": "RDMA", 00:17:34.413 "adrfam": "IPv4", 00:17:34.413 "traddr": "192.168.100.8", 00:17:34.413 "trsvcid": "40571" 00:17:34.413 }, 00:17:34.413 "auth": { 00:17:34.413 "state": "completed", 00:17:34.413 "digest": "sha512", 00:17:34.413 "dhgroup": "null" 00:17:34.413 } 00:17:34.413 } 00:17:34.413 ]' 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.413 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.673 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:34.673 08:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:35.243 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.503 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.503 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.503 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.503 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.503 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.503 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.504 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.763 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.763 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.022 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.022 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.023 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.023 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.023 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.023 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.023 { 00:17:36.023 "cntlid": 103, 00:17:36.023 "qid": 0, 00:17:36.023 "state": "enabled", 00:17:36.023 "thread": "nvmf_tgt_poll_group_000", 00:17:36.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:36.023 "listen_address": { 00:17:36.023 "trtype": "RDMA", 00:17:36.023 "adrfam": "IPv4", 00:17:36.023 "traddr": "192.168.100.8", 00:17:36.023 "trsvcid": "4420" 00:17:36.023 }, 00:17:36.023 "peer_address": { 00:17:36.023 "trtype": "RDMA", 00:17:36.023 "adrfam": "IPv4", 00:17:36.023 "traddr": "192.168.100.8", 00:17:36.023 "trsvcid": "57407" 00:17:36.023 }, 00:17:36.023 "auth": { 00:17:36.023 "state": "completed", 00:17:36.023 "digest": "sha512", 00:17:36.023 "dhgroup": "null" 00:17:36.023 } 00:17:36.023 } 00:17:36.023 ]' 00:17:36.023 08:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.023 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.023 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.281 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:36.281 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.281 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.281 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.281 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.540 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:36.540 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:37.116 08:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.116 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.375 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.634 00:17:37.634 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.634 08:54:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.634 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.893 { 00:17:37.893 "cntlid": 105, 00:17:37.893 "qid": 0, 00:17:37.893 "state": "enabled", 00:17:37.893 "thread": "nvmf_tgt_poll_group_000", 00:17:37.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:37.893 "listen_address": { 00:17:37.893 "trtype": "RDMA", 00:17:37.893 "adrfam": "IPv4", 00:17:37.893 "traddr": "192.168.100.8", 00:17:37.893 "trsvcid": "4420" 00:17:37.893 }, 00:17:37.893 "peer_address": { 00:17:37.893 "trtype": "RDMA", 00:17:37.893 "adrfam": "IPv4", 00:17:37.893 "traddr": "192.168.100.8", 00:17:37.893 "trsvcid": "57936" 00:17:37.893 }, 00:17:37.893 "auth": { 00:17:37.893 "state": "completed", 00:17:37.893 "digest": "sha512", 00:17:37.893 "dhgroup": "ffdhe2048" 00:17:37.893 } 00:17:37.893 } 00:17:37.893 ]' 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.893 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.894 08:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.152 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:38.152 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:38.719 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.977 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.977 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.977 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.977 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.977 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.977 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.977 08:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.236 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.495 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.495 { 00:17:39.495 "cntlid": 107, 00:17:39.495 "qid": 0, 00:17:39.495 "state": "enabled", 00:17:39.495 "thread": "nvmf_tgt_poll_group_000", 00:17:39.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:39.495 "listen_address": { 00:17:39.495 "trtype": "RDMA", 00:17:39.495 "adrfam": "IPv4", 00:17:39.495 "traddr": "192.168.100.8", 00:17:39.495 "trsvcid": "4420" 00:17:39.495 }, 00:17:39.495 "peer_address": { 00:17:39.495 "trtype": "RDMA", 00:17:39.495 "adrfam": "IPv4", 00:17:39.495 "traddr": "192.168.100.8", 00:17:39.495 "trsvcid": "34308" 00:17:39.495 }, 00:17:39.495 "auth": { 00:17:39.495 "state": "completed", 00:17:39.495 "digest": "sha512", 00:17:39.495 "dhgroup": "ffdhe2048" 00:17:39.495 } 00:17:39.495 } 00:17:39.495 ]' 00:17:39.495 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.754 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.754 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.754 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.754 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.754 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.754 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.754 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.013 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 
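[editor's note] The DHHC-1 strings passed with --dhchap-secret/--dhchap-ctrl-secret above are DH-HMAC-CHAP secrets in the representation the NVMe specification defines: the two-digit field after the prefix names the hash used to transform the key material (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), followed by base64 of the key plus a CRC. A minimal sketch of generating and using such a secret, assuming a build of nvme-cli that ships the gen-dhchap-key subcommand, and reusing the host NQN and target address from the log above:

    hostnqn="nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562"

    # Emit a DHHC-1:01:...: secret (SHA-256-transformed) bound to the host NQN
    key="$(nvme gen-dhchap-key --hmac=1 --nqn="$hostnqn")"

    # Unidirectional authentication; adding --dhchap-ctrl-secret as the test
    # above does makes it bidirectional
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
        -q "$hostnqn" --dhchap-secret "$key"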
00:17:40.013 08:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.580 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.838 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.097 00:17:41.097 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.097 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.097 08:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.357 { 00:17:41.357 "cntlid": 109, 00:17:41.357 "qid": 0, 00:17:41.357 "state": "enabled", 00:17:41.357 "thread": "nvmf_tgt_poll_group_000", 00:17:41.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:41.357 "listen_address": { 00:17:41.357 "trtype": "RDMA", 00:17:41.357 "adrfam": "IPv4", 00:17:41.357 "traddr": "192.168.100.8", 00:17:41.357 "trsvcid": "4420" 00:17:41.357 }, 00:17:41.357 "peer_address": { 00:17:41.357 "trtype": "RDMA", 00:17:41.357 "adrfam": "IPv4", 00:17:41.357 "traddr": "192.168.100.8", 00:17:41.357 "trsvcid": "50398" 00:17:41.357 }, 00:17:41.357 "auth": { 00:17:41.357 "state": "completed", 00:17:41.357 "digest": "sha512", 00:17:41.357 "dhgroup": "ffdhe2048" 00:17:41.357 } 00:17:41.357 } 00:17:41.357 ]' 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.357 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.358 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.358 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.358 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.358 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.358 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.358 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.617 08:54:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:41.617 08:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:42.185 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:42.444 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.704 08:54:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.704 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.704 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.962 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.962 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.963 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.963 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.963 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.963 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.963 { 00:17:42.963 "cntlid": 111, 00:17:42.963 "qid": 0, 00:17:42.963 "state": "enabled", 00:17:42.963 "thread": "nvmf_tgt_poll_group_000", 00:17:42.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.963 "listen_address": { 00:17:42.963 "trtype": "RDMA", 00:17:42.963 "adrfam": "IPv4", 00:17:42.963 "traddr": "192.168.100.8", 00:17:42.963 "trsvcid": "4420" 00:17:42.963 }, 00:17:42.963 "peer_address": { 00:17:42.963 "trtype": "RDMA", 00:17:42.963 "adrfam": "IPv4", 00:17:42.963 "traddr": "192.168.100.8", 00:17:42.963 "trsvcid": "45238" 00:17:42.963 }, 00:17:42.963 "auth": { 00:17:42.963 "state": "completed", 00:17:42.963 "digest": "sha512", 00:17:42.963 "dhgroup": "ffdhe2048" 00:17:42.963 } 00:17:42.963 } 00:17:42.963 ]' 00:17:42.963 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.963 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.222 08:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.222 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.222 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.222 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.222 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.222 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.481 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:43.481 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:44.049 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.049 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.049 08:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.049 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.049 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.049 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.049 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.049 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.049 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.308 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.567 00:17:44.567 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.567 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.567 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.826 { 00:17:44.826 "cntlid": 113, 00:17:44.826 "qid": 0, 00:17:44.826 "state": "enabled", 00:17:44.826 "thread": "nvmf_tgt_poll_group_000", 00:17:44.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:44.826 "listen_address": { 00:17:44.826 "trtype": "RDMA", 00:17:44.826 "adrfam": "IPv4", 00:17:44.826 "traddr": "192.168.100.8", 00:17:44.826 "trsvcid": "4420" 00:17:44.826 }, 00:17:44.826 "peer_address": { 00:17:44.826 "trtype": "RDMA", 00:17:44.826 "adrfam": "IPv4", 00:17:44.826 "traddr": "192.168.100.8", 00:17:44.826 "trsvcid": "42004" 00:17:44.826 }, 00:17:44.826 "auth": { 00:17:44.826 "state": "completed", 00:17:44.826 "digest": "sha512", 00:17:44.826 "dhgroup": "ffdhe3072" 00:17:44.826 } 00:17:44.826 } 00:17:44.826 ]' 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.826 08:54:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.085 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:45.085 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
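[editor's note] Every connect_authenticate iteration in this log follows the same host-then-target pattern. A condensed sketch of the ffdhe3072/key1 pass, built only from the rpc.py invocations that appear verbatim above; the target-side socket is not visible in this excerpt, so the SPDK default (/var/tmp/spdk.sock) is assumed, and key1/ckey1 are keyring names registered earlier in the run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

    # Host side (the initiator behind /var/tmp/host.sock): pin the negotiation
    # to the sha512 digest and the ffdhe3072 DH group
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Target side: authorize the host and bind it to key1, with ckey1 as the
    # controller key so authentication is bidirectional
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side again: attach with the matching key pair, which triggers the
    # DH-HMAC-CHAP exchange verified in the qpair dumps below
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1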
00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.021 08:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.021 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.021 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.021 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.021 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.280 00:17:46.280 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.280 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.280 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.540 { 00:17:46.540 "cntlid": 115, 00:17:46.540 "qid": 0, 00:17:46.540 "state": "enabled", 00:17:46.540 "thread": "nvmf_tgt_poll_group_000", 00:17:46.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:46.540 "listen_address": { 00:17:46.540 "trtype": "RDMA", 00:17:46.540 "adrfam": "IPv4", 00:17:46.540 "traddr": "192.168.100.8", 00:17:46.540 "trsvcid": "4420" 00:17:46.540 }, 00:17:46.540 "peer_address": { 00:17:46.540 "trtype": "RDMA", 00:17:46.540 "adrfam": "IPv4", 00:17:46.540 "traddr": "192.168.100.8", 00:17:46.540 "trsvcid": "50622" 00:17:46.540 }, 00:17:46.540 "auth": { 00:17:46.540 "state": "completed", 00:17:46.540 "digest": "sha512", 00:17:46.540 "dhgroup": "ffdhe3072" 00:17:46.540 } 00:17:46.540 } 00:17:46.540 ]' 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.540 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
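[editor's note] The qpair listings interleaved above are where the negotiated parameters can be read back. The three separate jq probes the script runs (.digest, .dhgroup, .state) can be collapsed into one pass; same assumption about the target-side socket as in the sketch above:

    # Expected output for the iteration above: "sha512 ffdhe3072 completed"
    "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'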
00:17:46.804 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.804 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.804 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.804 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.804 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.063 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:47.063 08:54:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.631 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.889 
08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.889 08:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.149 00:17:48.149 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.149 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.149 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.407 { 00:17:48.407 "cntlid": 117, 00:17:48.407 "qid": 0, 00:17:48.407 "state": "enabled", 00:17:48.407 "thread": "nvmf_tgt_poll_group_000", 00:17:48.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.407 "listen_address": { 00:17:48.407 "trtype": "RDMA", 00:17:48.407 "adrfam": "IPv4", 00:17:48.407 "traddr": "192.168.100.8", 00:17:48.407 "trsvcid": "4420" 00:17:48.407 }, 00:17:48.407 "peer_address": { 00:17:48.407 "trtype": "RDMA", 00:17:48.407 "adrfam": "IPv4", 00:17:48.407 "traddr": "192.168.100.8", 00:17:48.407 "trsvcid": "46146" 00:17:48.407 }, 00:17:48.407 "auth": { 00:17:48.407 "state": "completed", 00:17:48.407 "digest": "sha512", 00:17:48.407 "dhgroup": "ffdhe3072" 00:17:48.407 } 00:17:48.407 } 00:17:48.407 ]' 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.407 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.408 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.408 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.408 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.408 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.667 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:48.667 08:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:49.235 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.495 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.495 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.495 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.495 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.495 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.495 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.495 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.754 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.014 00:17:50.014 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.014 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.014 08:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.014 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.014 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.014 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.014 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.014 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.014 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.014 { 00:17:50.014 "cntlid": 119, 00:17:50.014 "qid": 0, 00:17:50.014 "state": "enabled", 00:17:50.014 "thread": "nvmf_tgt_poll_group_000", 00:17:50.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:50.014 "listen_address": { 00:17:50.014 "trtype": "RDMA", 00:17:50.014 "adrfam": "IPv4", 00:17:50.014 "traddr": "192.168.100.8", 00:17:50.014 "trsvcid": "4420" 00:17:50.014 }, 00:17:50.014 "peer_address": { 00:17:50.014 "trtype": "RDMA", 00:17:50.014 "adrfam": "IPv4", 00:17:50.014 "traddr": "192.168.100.8", 00:17:50.014 "trsvcid": "38370" 00:17:50.014 }, 00:17:50.014 "auth": { 00:17:50.014 "state": "completed", 00:17:50.014 "digest": "sha512", 00:17:50.014 "dhgroup": "ffdhe3072" 
00:17:50.014 } 00:17:50.014 } 00:17:50.014 ]' 00:17:50.014 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.272 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.272 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.272 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.272 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.272 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.272 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.272 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.530 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:50.530 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:51.098 08:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.098 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.357 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.616 00:17:51.616 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.616 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.616 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.875 { 00:17:51.875 "cntlid": 121, 00:17:51.875 "qid": 0, 00:17:51.875 "state": "enabled", 00:17:51.875 "thread": "nvmf_tgt_poll_group_000", 00:17:51.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:51.875 "listen_address": { 00:17:51.875 "trtype": "RDMA", 00:17:51.875 "adrfam": "IPv4", 00:17:51.875 "traddr": "192.168.100.8", 00:17:51.875 "trsvcid": "4420" 00:17:51.875 }, 00:17:51.875 "peer_address": { 00:17:51.875 "trtype": "RDMA", 
00:17:51.875 "adrfam": "IPv4", 00:17:51.875 "traddr": "192.168.100.8", 00:17:51.875 "trsvcid": "58112" 00:17:51.875 }, 00:17:51.875 "auth": { 00:17:51.875 "state": "completed", 00:17:51.875 "digest": "sha512", 00:17:51.875 "dhgroup": "ffdhe4096" 00:17:51.875 } 00:17:51.875 } 00:17:51.875 ]' 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.875 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.134 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.135 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.135 08:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.135 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:52.135 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:53.072 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.072 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.072 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.072 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.072 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.073 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.073 08:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.073 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.332 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.591 { 00:17:53.591 "cntlid": 123, 00:17:53.591 "qid": 0, 00:17:53.591 "state": "enabled", 00:17:53.591 "thread": "nvmf_tgt_poll_group_000", 
00:17:53.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:53.591 "listen_address": { 00:17:53.591 "trtype": "RDMA", 00:17:53.591 "adrfam": "IPv4", 00:17:53.591 "traddr": "192.168.100.8", 00:17:53.591 "trsvcid": "4420" 00:17:53.591 }, 00:17:53.591 "peer_address": { 00:17:53.591 "trtype": "RDMA", 00:17:53.591 "adrfam": "IPv4", 00:17:53.591 "traddr": "192.168.100.8", 00:17:53.591 "trsvcid": "44586" 00:17:53.591 }, 00:17:53.591 "auth": { 00:17:53.591 "state": "completed", 00:17:53.591 "digest": "sha512", 00:17:53.591 "dhgroup": "ffdhe4096" 00:17:53.591 } 00:17:53.591 } 00:17:53.591 ]' 00:17:53.591 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.850 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.850 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.850 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.850 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.850 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.850 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.850 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.109 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:54.109 08:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:17:54.677 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.936 08:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.195 00:17:55.195 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.195 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.195 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
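One bash detail worth noting in the @68 lines of this trace: ckey is built with the ${var:+word} expansion, so the --dhchap-ctrlr-key argument pair only materializes when a controller key exists for that index, which is why the key3 cycles add the host with --dhchap-key key3 alone. A standalone illustration (inside connect_authenticate the index arrives as $3, hence ${ckeys[$3]:+...} in the trace):

# ${var:+word} expands to word only when var is set and non-empty, so the
# whole option pair disappears for indices with no controller key.
ckeys=(ckey0 ckey1 ckey2 "")               # index 3 intentionally empty
for i in 0 3; do
    ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "key$i extra args: ${ckey[*]:-<none>}"
done
# key0 extra args: --dhchap-ctrlr-key ckey0
# key3 extra args: <none>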
00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.454 { 00:17:55.454 "cntlid": 125, 00:17:55.454 "qid": 0, 00:17:55.454 "state": "enabled", 00:17:55.454 "thread": "nvmf_tgt_poll_group_000", 00:17:55.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:55.454 "listen_address": { 00:17:55.454 "trtype": "RDMA", 00:17:55.454 "adrfam": "IPv4", 00:17:55.454 "traddr": "192.168.100.8", 00:17:55.454 "trsvcid": "4420" 00:17:55.454 }, 00:17:55.454 "peer_address": { 00:17:55.454 "trtype": "RDMA", 00:17:55.454 "adrfam": "IPv4", 00:17:55.454 "traddr": "192.168.100.8", 00:17:55.454 "trsvcid": "41236" 00:17:55.454 }, 00:17:55.454 "auth": { 00:17:55.454 "state": "completed", 00:17:55.454 "digest": "sha512", 00:17:55.454 "dhgroup": "ffdhe4096" 00:17:55.454 } 00:17:55.454 } 00:17:55.454 ]' 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.454 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.712 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.713 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.713 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.713 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.713 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.971 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:55.971 08:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:17:56.539 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.539 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:56.539 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.539 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.539 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.539 08:54:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.539 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.539 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.805 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.066 00:17:57.066 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.066 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.066 08:54:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.325 08:54:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.325 { 00:17:57.325 "cntlid": 127, 00:17:57.325 "qid": 0, 00:17:57.325 "state": "enabled", 00:17:57.325 "thread": "nvmf_tgt_poll_group_000", 00:17:57.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:57.325 "listen_address": { 00:17:57.325 "trtype": "RDMA", 00:17:57.325 "adrfam": "IPv4", 00:17:57.325 "traddr": "192.168.100.8", 00:17:57.325 "trsvcid": "4420" 00:17:57.325 }, 00:17:57.325 "peer_address": { 00:17:57.325 "trtype": "RDMA", 00:17:57.325 "adrfam": "IPv4", 00:17:57.325 "traddr": "192.168.100.8", 00:17:57.325 "trsvcid": "57135" 00:17:57.325 }, 00:17:57.325 "auth": { 00:17:57.325 "state": "completed", 00:17:57.325 "digest": "sha512", 00:17:57.325 "dhgroup": "ffdhe4096" 00:17:57.325 } 00:17:57.325 } 00:17:57.325 ]' 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.325 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.584 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:57.584 08:54:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:17:58.152 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.411 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.670 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.929 00:17:58.929 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.929 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.929 08:54:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.188 08:54:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.188 { 00:17:59.188 "cntlid": 129, 00:17:59.188 "qid": 0, 00:17:59.188 "state": "enabled", 00:17:59.188 "thread": "nvmf_tgt_poll_group_000", 00:17:59.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.188 "listen_address": { 00:17:59.188 "trtype": "RDMA", 00:17:59.188 "adrfam": "IPv4", 00:17:59.188 "traddr": "192.168.100.8", 00:17:59.188 "trsvcid": "4420" 00:17:59.188 }, 00:17:59.188 "peer_address": { 00:17:59.188 "trtype": "RDMA", 00:17:59.188 "adrfam": "IPv4", 00:17:59.188 "traddr": "192.168.100.8", 00:17:59.188 "trsvcid": "57322" 00:17:59.188 }, 00:17:59.188 "auth": { 00:17:59.188 "state": "completed", 00:17:59.188 "digest": "sha512", 00:17:59.188 "dhgroup": "ffdhe6144" 00:17:59.188 } 00:17:59.188 } 00:17:59.188 ]' 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.188 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.447 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:17:59.447 08:54:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:18:00.014 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.274 08:54:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.274 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.274 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.274 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.274 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.274 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.274 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.533 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.793 00:18:00.793 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.793 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:18:00.793 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.052 { 00:18:01.052 "cntlid": 131, 00:18:01.052 "qid": 0, 00:18:01.052 "state": "enabled", 00:18:01.052 "thread": "nvmf_tgt_poll_group_000", 00:18:01.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:01.052 "listen_address": { 00:18:01.052 "trtype": "RDMA", 00:18:01.052 "adrfam": "IPv4", 00:18:01.052 "traddr": "192.168.100.8", 00:18:01.052 "trsvcid": "4420" 00:18:01.052 }, 00:18:01.052 "peer_address": { 00:18:01.052 "trtype": "RDMA", 00:18:01.052 "adrfam": "IPv4", 00:18:01.052 "traddr": "192.168.100.8", 00:18:01.052 "trsvcid": "52620" 00:18:01.052 }, 00:18:01.052 "auth": { 00:18:01.052 "state": "completed", 00:18:01.052 "digest": "sha512", 00:18:01.052 "dhgroup": "ffdhe6144" 00:18:01.052 } 00:18:01.052 } 00:18:01.052 ]' 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.052 08:54:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.052 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.052 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.052 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.311 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:18:01.311 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret 
DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:18:01.878 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.137 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.137 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.137 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.137 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.137 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.137 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.137 08:54:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.396 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.655 00:18:02.655 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.655 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.655 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.913 { 00:18:02.913 "cntlid": 133, 00:18:02.913 "qid": 0, 00:18:02.913 "state": "enabled", 00:18:02.913 "thread": "nvmf_tgt_poll_group_000", 00:18:02.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.913 "listen_address": { 00:18:02.913 "trtype": "RDMA", 00:18:02.913 "adrfam": "IPv4", 00:18:02.913 "traddr": "192.168.100.8", 00:18:02.913 "trsvcid": "4420" 00:18:02.913 }, 00:18:02.913 "peer_address": { 00:18:02.913 "trtype": "RDMA", 00:18:02.913 "adrfam": "IPv4", 00:18:02.913 "traddr": "192.168.100.8", 00:18:02.913 "trsvcid": "59331" 00:18:02.913 }, 00:18:02.913 "auth": { 00:18:02.913 "state": "completed", 00:18:02.913 "digest": "sha512", 00:18:02.913 "dhgroup": "ffdhe6144" 00:18:02.913 } 00:18:02.913 } 00:18:02.913 ]' 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.913 08:54:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.172 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:18:03.172 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:18:03.739 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.998 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.998 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.998 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.998 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.998 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.998 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.998 08:54:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.257 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.516 00:18:04.516 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.516 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.516 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.775 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.775 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.775 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.775 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.775 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.775 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.775 { 00:18:04.775 "cntlid": 135, 00:18:04.775 "qid": 0, 00:18:04.775 "state": "enabled", 00:18:04.775 "thread": "nvmf_tgt_poll_group_000", 00:18:04.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:04.775 "listen_address": { 00:18:04.776 "trtype": "RDMA", 00:18:04.776 "adrfam": "IPv4", 00:18:04.776 "traddr": "192.168.100.8", 00:18:04.776 "trsvcid": "4420" 00:18:04.776 }, 00:18:04.776 "peer_address": { 00:18:04.776 "trtype": "RDMA", 00:18:04.776 "adrfam": "IPv4", 00:18:04.776 "traddr": "192.168.100.8", 00:18:04.776 "trsvcid": "40617" 00:18:04.776 }, 00:18:04.776 "auth": { 00:18:04.776 "state": "completed", 00:18:04.776 "digest": "sha512", 00:18:04.776 "dhgroup": "ffdhe6144" 00:18:04.776 } 00:18:04.776 } 00:18:04.776 ]' 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.776 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.035 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 
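For reference, each pass of the keyid loop above reduces to the same five-step RPC sequence: the SPDK host app (reached through rpc.py -s /var/tmp/host.sock) is restricted to a single digest/dhgroup pair, the host NQN is re-registered on the target's subsystem with the key pair under test, a controller attach forces DH-HMAC-CHAP to run, and the negotiated parameters are read back from the target's qpair list before tearing down. A minimal sketch in the harness's own terms, assuming the target RPC socket is the default one, the key names key1/ckey1 already refer to keys loaded earlier in the run, and hostnqn is the host NQN used throughout this log:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  # Host side: offer only sha512 + ffdhe6144 during negotiation
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # Target side: trust the host with key1 (host key) and ckey1 (controller key)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: attaching the controller performs bidirectional DH-HMAC-CHAP
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Target side: confirm what the qpair actually negotiated
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # Tear down before the next key/dhgroup combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The nvme connect / nvme disconnect steps interleaved in the log exercise the same key material through the kernel host instead, passing the raw DHHC-1 secrets via --dhchap-secret and --dhchap-ctrl-secret.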
00:18:05.035 08:54:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:18:05.602 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.861 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.121 08:54:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.689 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.689 { 00:18:06.689 "cntlid": 137, 00:18:06.689 "qid": 0, 00:18:06.689 "state": "enabled", 00:18:06.689 "thread": "nvmf_tgt_poll_group_000", 00:18:06.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:06.689 "listen_address": { 00:18:06.689 "trtype": "RDMA", 00:18:06.689 "adrfam": "IPv4", 00:18:06.689 "traddr": "192.168.100.8", 00:18:06.689 "trsvcid": "4420" 00:18:06.689 }, 00:18:06.689 "peer_address": { 00:18:06.689 "trtype": "RDMA", 00:18:06.689 "adrfam": "IPv4", 00:18:06.689 "traddr": "192.168.100.8", 00:18:06.689 "trsvcid": "47204" 00:18:06.689 }, 00:18:06.689 "auth": { 00:18:06.689 "state": "completed", 00:18:06.689 "digest": "sha512", 00:18:06.689 "dhgroup": "ffdhe8192" 00:18:06.689 } 00:18:06.689 } 00:18:06.689 ]' 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.689 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.948 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.948 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.948 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.948 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.948 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.948 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:18:06.948 08:54:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:18:07.884 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.885 08:54:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.453 00:18:08.453 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.453 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.453 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.712 { 00:18:08.712 "cntlid": 139, 00:18:08.712 "qid": 0, 00:18:08.712 "state": "enabled", 00:18:08.712 "thread": "nvmf_tgt_poll_group_000", 00:18:08.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.712 "listen_address": { 00:18:08.712 "trtype": "RDMA", 00:18:08.712 "adrfam": "IPv4", 00:18:08.712 "traddr": "192.168.100.8", 00:18:08.712 "trsvcid": "4420" 00:18:08.712 }, 00:18:08.712 "peer_address": { 00:18:08.712 "trtype": "RDMA", 00:18:08.712 "adrfam": "IPv4", 00:18:08.712 "traddr": "192.168.100.8", 00:18:08.712 "trsvcid": "36386" 00:18:08.712 }, 00:18:08.712 "auth": { 00:18:08.712 "state": "completed", 00:18:08.712 "digest": "sha512", 00:18:08.712 "dhgroup": "ffdhe8192" 00:18:08.712 } 00:18:08.712 } 00:18:08.712 ]' 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.712 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.971 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:18:08.971 08:54:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: --dhchap-ctrl-secret DHHC-1:02:MWU0MDA2YmFmNDc2MDVkMTk2YjU1NDJlNjgzZjBmZDkxNjk3ZmQwMzAzNzE5OGU4ukSHCw==: 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.907 08:54:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.475 00:18:10.475 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.475 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.475 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.733 { 00:18:10.733 "cntlid": 141, 00:18:10.733 "qid": 0, 00:18:10.733 "state": "enabled", 00:18:10.733 "thread": "nvmf_tgt_poll_group_000", 00:18:10.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.733 "listen_address": { 00:18:10.733 "trtype": "RDMA", 00:18:10.733 "adrfam": "IPv4", 00:18:10.733 "traddr": "192.168.100.8", 00:18:10.733 "trsvcid": "4420" 00:18:10.733 }, 00:18:10.733 "peer_address": { 00:18:10.733 "trtype": "RDMA", 00:18:10.733 "adrfam": "IPv4", 00:18:10.733 "traddr": "192.168.100.8", 00:18:10.733 "trsvcid": "47576" 00:18:10.733 }, 00:18:10.733 "auth": { 00:18:10.733 "state": "completed", 00:18:10.733 "digest": "sha512", 00:18:10.733 "dhgroup": "ffdhe8192" 00:18:10.733 } 00:18:10.733 } 00:18:10.733 ]' 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.733 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.991 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:18:10.991 08:54:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:01:NjhkMmE0NzgxMDc1YTdhNzBiZjAwYmMwZDM4MWY3YmW/Qlwp: 00:18:11.559 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.818 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.818 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.818 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.818 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.818 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.818 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.818 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.077 08:54:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.335 00:18:12.335 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.335 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.335 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.594 { 00:18:12.594 "cntlid": 143, 00:18:12.594 "qid": 0, 00:18:12.594 "state": "enabled", 00:18:12.594 "thread": "nvmf_tgt_poll_group_000", 00:18:12.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.594 "listen_address": { 00:18:12.594 "trtype": "RDMA", 00:18:12.594 "adrfam": "IPv4", 00:18:12.594 "traddr": "192.168.100.8", 00:18:12.594 "trsvcid": "4420" 00:18:12.594 }, 00:18:12.594 "peer_address": { 00:18:12.594 "trtype": "RDMA", 00:18:12.594 "adrfam": "IPv4", 00:18:12.594 "traddr": "192.168.100.8", 00:18:12.594 "trsvcid": "54705" 00:18:12.594 }, 00:18:12.594 "auth": { 00:18:12.594 "state": "completed", 00:18:12.594 "digest": "sha512", 00:18:12.594 "dhgroup": "ffdhe8192" 00:18:12.594 } 00:18:12.594 } 00:18:12.594 ]' 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.594 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.594 08:54:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.853 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.853 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.853 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.853 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.853 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.112 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:18:13.112 08:54:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:13.680 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.939 08:54:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.939 08:54:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.507 00:18:14.507 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.507 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.507 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.766 { 00:18:14.766 "cntlid": 145, 00:18:14.766 "qid": 0, 00:18:14.766 "state": "enabled", 00:18:14.766 "thread": "nvmf_tgt_poll_group_000", 00:18:14.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:14.766 "listen_address": { 00:18:14.766 "trtype": "RDMA", 00:18:14.766 "adrfam": "IPv4", 00:18:14.766 "traddr": "192.168.100.8", 00:18:14.766 "trsvcid": "4420" 00:18:14.766 }, 00:18:14.766 
"peer_address": { 00:18:14.766 "trtype": "RDMA", 00:18:14.766 "adrfam": "IPv4", 00:18:14.766 "traddr": "192.168.100.8", 00:18:14.766 "trsvcid": "42454" 00:18:14.766 }, 00:18:14.766 "auth": { 00:18:14.766 "state": "completed", 00:18:14.766 "digest": "sha512", 00:18:14.766 "dhgroup": "ffdhe8192" 00:18:14.766 } 00:18:14.766 } 00:18:14.766 ]' 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.766 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.025 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:18:15.025 08:54:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGJmZWVhOTljNWQxMDY2ZTA3MWU0ZWY2OTc3MzJmMDE1MTVkYjcyMDQ4MDJhZGI5H4fmPg==: --dhchap-ctrl-secret DHHC-1:03:YTEwZWQwYmE5MjQyZTIxYzgzNGE3MGQ2MzFkYmE5NzEwYmMwM2MwZDQyYTVmYzI2NmEyM2M2MTYxMDdkOGU2MTHblKY=: 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.689 08:54:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:15.689 08:54:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:16.369 request: 00:18:16.369 { 00:18:16.369 "name": "nvme0", 00:18:16.369 "trtype": "rdma", 00:18:16.369 "traddr": "192.168.100.8", 00:18:16.369 "adrfam": "ipv4", 00:18:16.369 "trsvcid": "4420", 00:18:16.369 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.369 "prchk_reftag": false, 00:18:16.369 "prchk_guard": false, 00:18:16.369 "hdgst": false, 00:18:16.369 "ddgst": false, 00:18:16.369 "dhchap_key": "key2", 00:18:16.369 "allow_unrecognized_csi": false, 00:18:16.369 "method": "bdev_nvme_attach_controller", 00:18:16.369 "req_id": 1 00:18:16.369 } 00:18:16.369 Got JSON-RPC error response 00:18:16.369 response: 00:18:16.369 { 00:18:16.369 "code": -5, 00:18:16.369 "message": "Input/output error" 00:18:16.369 } 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.369 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.370 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.635 request: 00:18:16.635 { 00:18:16.635 "name": "nvme0", 00:18:16.635 "trtype": "rdma", 00:18:16.635 "traddr": "192.168.100.8", 00:18:16.635 "adrfam": "ipv4", 00:18:16.635 "trsvcid": "4420", 00:18:16.635 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.635 "prchk_reftag": false, 00:18:16.635 "prchk_guard": false, 00:18:16.635 "hdgst": false, 00:18:16.635 "ddgst": false, 00:18:16.635 "dhchap_key": "key1", 00:18:16.635 "dhchap_ctrlr_key": "ckey2", 00:18:16.635 "allow_unrecognized_csi": false, 00:18:16.635 "method": "bdev_nvme_attach_controller", 00:18:16.635 "req_id": 1 00:18:16.635 } 00:18:16.635 Got JSON-RPC error response 00:18:16.635 response: 00:18:16.635 { 00:18:16.635 "code": -5, 00:18:16.635 "message": "Input/output error" 00:18:16.635 } 00:18:16.635 08:54:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:16.635 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.635 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.635 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.636 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:16.898 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.898 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.898 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.898 08:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.156 request: 00:18:17.156 { 00:18:17.156 "name": "nvme0", 
00:18:17.156 "trtype": "rdma", 00:18:17.156 "traddr": "192.168.100.8", 00:18:17.156 "adrfam": "ipv4", 00:18:17.156 "trsvcid": "4420", 00:18:17.156 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:17.156 "prchk_reftag": false, 00:18:17.156 "prchk_guard": false, 00:18:17.156 "hdgst": false, 00:18:17.156 "ddgst": false, 00:18:17.156 "dhchap_key": "key1", 00:18:17.156 "dhchap_ctrlr_key": "ckey1", 00:18:17.156 "allow_unrecognized_csi": false, 00:18:17.156 "method": "bdev_nvme_attach_controller", 00:18:17.156 "req_id": 1 00:18:17.156 } 00:18:17.156 Got JSON-RPC error response 00:18:17.156 response: 00:18:17.156 { 00:18:17.156 "code": -5, 00:18:17.156 "message": "Input/output error" 00:18:17.156 } 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 426944 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 426944 ']' 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 426944 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.156 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 426944 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 426944' 00:18:17.415 killing process with pid 426944 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 426944 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 426944 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:17.415 08:54:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=451765 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 451765 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 451765 ']' 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.415 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 451765 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 451765 ']' 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
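Note on the step above: the target is relaunched with --wait-for-rpc, so nvmf_tgt (pid 451765) holds off subsystem init until told to proceed over RPC, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming the rpc.py path and socket shown in this trace (the real waitforlisten() in autotest_common.sh adds a retry budget and richer diagnostics; the loop bound and sleep interval here are illustrative):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    # Poll until the app's RPC server accepts requests; rpc_get_methods
    # responds even while the app is still parked in --wait-for-rpc mode.
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done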
00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.674 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.934 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.934 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:17.934 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:17.934 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.934 08:54:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.193 null0 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.JR0 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.NxL ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NxL 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kOQ 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.RZF ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RZF 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
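Note on the keyring_file_add_key calls above and below: they come from a loop over the generated DH-CHAP key files, registering each host key under the keyring name keyN and, when a bidirectional key was generated, its controller key as ckeyN. Roughly, using the file names visible in this trace (the array assignments are a reconstruction from the log, not quoted from auth.sh; rpc_cmd is the suite's RPC wrapper):

    keys[0]=/tmp/spdk.key-null.JR0;   ckeys[0]=/tmp/spdk.key-sha512.NxL
    keys[1]=/tmp/spdk.key-sha256.kOQ; ckeys[1]=/tmp/spdk.key-sha384.RZF
    for i in "${!keys[@]}"; do
        # register the host key under the keyring name "key$i"
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        # register the controller key only when one exists for this slot
        [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done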
00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.MUP 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.PUX ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PUX 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.G2e 00:18:18.193 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.194 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.131 nvme0n1 00:18:19.131 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.131 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.131 08:54:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.131 { 00:18:19.131 "cntlid": 1, 00:18:19.131 "qid": 0, 00:18:19.131 "state": "enabled", 00:18:19.131 "thread": "nvmf_tgt_poll_group_000", 00:18:19.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:19.131 "listen_address": { 00:18:19.131 "trtype": "RDMA", 00:18:19.131 "adrfam": "IPv4", 00:18:19.131 "traddr": "192.168.100.8", 00:18:19.131 "trsvcid": "4420" 00:18:19.131 }, 00:18:19.131 "peer_address": { 00:18:19.131 "trtype": "RDMA", 00:18:19.131 "adrfam": "IPv4", 00:18:19.131 "traddr": "192.168.100.8", 00:18:19.131 "trsvcid": "37259" 00:18:19.131 }, 00:18:19.131 "auth": { 00:18:19.131 "state": "completed", 00:18:19.131 "digest": "sha512", 00:18:19.131 "dhgroup": "ffdhe8192" 00:18:19.131 } 00:18:19.131 } 00:18:19.131 ]' 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.131 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.391 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.391 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.391 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.391 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.391 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.391 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:18:19.391 08:54:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:20.327 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.586 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.845 request: 00:18:20.845 { 00:18:20.845 "name": "nvme0", 00:18:20.845 "trtype": "rdma", 00:18:20.845 "traddr": "192.168.100.8", 00:18:20.845 "adrfam": "ipv4", 00:18:20.845 "trsvcid": "4420", 00:18:20.845 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:20.845 "prchk_reftag": false, 00:18:20.845 "prchk_guard": false, 00:18:20.845 "hdgst": false, 00:18:20.845 "ddgst": false, 00:18:20.845 "dhchap_key": "key3", 00:18:20.845 "allow_unrecognized_csi": false, 00:18:20.845 "method": "bdev_nvme_attach_controller", 00:18:20.845 "req_id": 1 00:18:20.845 } 00:18:20.845 Got JSON-RPC error response 00:18:20.845 response: 00:18:20.845 { 00:18:20.845 "code": -5, 00:18:20.845 "message": "Input/output error" 00:18:20.845 } 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
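Note on this negotiation-mismatch step: the earlier successful attach for key3 completed auth with sha512/ffdhe8192, so the host is first pinned to --dhchap-digests sha256 and then to --dhchap-dhgroups ffdhe2048, and each attach attempt is expected to fail with -5 (Input/output error); the NOT helper inverts the exit status so a failure passes the test. In outline (commands as traced; NOT and bdev_connect are the suite's helpers):

    hostrpc bdev_nvme_set_options --dhchap-digests sha256
    NOT bdev_connect -b nvme0 --dhchap-key key3   # fails with -5 per the trace
    hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 \
        --dhchap-digests sha256,sha384,sha512
    NOT bdev_connect -b nvme0 --dhchap-key key3   # fails with -5 again below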
00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.845 08:54:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.105 request: 00:18:21.105 { 00:18:21.105 "name": "nvme0", 00:18:21.105 "trtype": "rdma", 00:18:21.105 "traddr": "192.168.100.8", 00:18:21.105 "adrfam": "ipv4", 00:18:21.105 "trsvcid": "4420", 00:18:21.105 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.105 "prchk_reftag": false, 00:18:21.105 "prchk_guard": false, 00:18:21.105 "hdgst": false, 00:18:21.105 "ddgst": false, 00:18:21.105 "dhchap_key": "key3", 00:18:21.105 "allow_unrecognized_csi": false, 00:18:21.105 "method": "bdev_nvme_attach_controller", 00:18:21.105 "req_id": 1 00:18:21.105 } 00:18:21.105 Got JSON-RPC error response 00:18:21.105 response: 00:18:21.105 { 00:18:21.105 "code": -5, 00:18:21.105 "message": "Input/output error" 00:18:21.105 } 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.105 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.364 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.933 request: 00:18:21.933 { 00:18:21.933 "name": "nvme0", 00:18:21.933 "trtype": "rdma", 00:18:21.933 "traddr": "192.168.100.8", 00:18:21.933 "adrfam": "ipv4", 00:18:21.933 "trsvcid": "4420", 00:18:21.933 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.933 "prchk_reftag": false, 00:18:21.933 "prchk_guard": false, 00:18:21.933 "hdgst": false, 00:18:21.933 "ddgst": false, 00:18:21.933 "dhchap_key": "key0", 00:18:21.933 "dhchap_ctrlr_key": "key1", 00:18:21.933 "allow_unrecognized_csi": false, 00:18:21.933 "method": "bdev_nvme_attach_controller", 00:18:21.933 "req_id": 1 00:18:21.933 } 00:18:21.933 Got JSON-RPC error response 00:18:21.933 response: 00:18:21.933 { 00:18:21.933 "code": -5, 00:18:21.933 "message": "Input/output error" 00:18:21.933 } 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.933 
08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:21.933 nvme0n1 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.933 08:54:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:22.192 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.192 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.192 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.451 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:22.451 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.451 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.451 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.451 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:22.451 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:22.451 08:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:23.387 nvme0n1 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:23.387 08:54:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.387 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:23.647 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.647 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:18:23.647 08:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: --dhchap-ctrl-secret DHHC-1:03:NDAwMGVjNzI4MjFiZDU5ZDY3NjQwNjE2OTVhMDYwMWMxMGY0NWJiZmI5ZGM5YTc0YzZiZGNiYjIzMmNmN2UyOKTgtWk=: 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.215 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:24.474 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:25.040 request: 00:18:25.040 { 00:18:25.040 "name": "nvme0", 00:18:25.040 "trtype": "rdma", 00:18:25.040 "traddr": "192.168.100.8", 00:18:25.040 "adrfam": "ipv4", 00:18:25.041 "trsvcid": "4420", 00:18:25.041 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.041 "prchk_reftag": false, 00:18:25.041 "prchk_guard": false, 00:18:25.041 "hdgst": false, 00:18:25.041 "ddgst": false, 00:18:25.041 "dhchap_key": "key1", 00:18:25.041 "allow_unrecognized_csi": false, 00:18:25.041 "method": "bdev_nvme_attach_controller", 00:18:25.041 "req_id": 1 00:18:25.041 } 00:18:25.041 Got JSON-RPC error response 00:18:25.041 response: 00:18:25.041 { 00:18:25.041 "code": -5, 00:18:25.041 "message": "Input/output error" 00:18:25.041 } 00:18:25.041 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.041 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.041 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.041 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.041 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.041 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.041 08:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.608 nvme0n1 00:18:25.608 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:25.608 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:25.608 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.867 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.867 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.867 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.126 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.126 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.126 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.126 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.126 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:26.126 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:26.126 08:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:26.385 nvme0n1 00:18:26.385 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:26.385 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:26.385 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.644 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.644 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: '' 2s 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: ]] 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDdlNWNjNjU3MzdlN2RmYWFiODIxZGE1N2IwNDVlZWPXixpc: 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:26.645 08:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.179 08:54:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: 2s 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: ]] 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWYyMjc3NWQyYTU1MWViYmZiMDJkMGRkYzk4MzY5OGYwYzk3N2FjYWNhYWU4OTFkyK1rhA==: 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:29.179 08:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.082 08:54:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.082 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.083 08:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.649 nvme0n1 00:18:31.649 08:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.649 08:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.650 08:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.650 08:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.650 08:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.650 08:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.217 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:32.217 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:32.217 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:32.476 08:54:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:32.476 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.736 08:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:33.305 request: 00:18:33.305 { 00:18:33.305 "name": "nvme0", 00:18:33.305 "dhchap_key": "key1", 00:18:33.305 "dhchap_ctrlr_key": "key3", 00:18:33.305 "method": "bdev_nvme_set_keys", 00:18:33.305 "req_id": 1 00:18:33.305 } 00:18:33.305 Got JSON-RPC error response 00:18:33.305 response: 00:18:33.305 { 00:18:33.305 "code": -13, 00:18:33.305 "message": "Permission denied" 00:18:33.305 } 00:18:33.305 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.305 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.305 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.305 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.305 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:18:33.305 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.305 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:33.564 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:33.564 08:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:34.501 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:34.501 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:34.501 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:34.760 08:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.328 nvme0n1 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:35.328 
08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:35.328 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:35.896 request: 00:18:35.896 { 00:18:35.896 "name": "nvme0", 00:18:35.896 "dhchap_key": "key2", 00:18:35.896 "dhchap_ctrlr_key": "key0", 00:18:35.896 "method": "bdev_nvme_set_keys", 00:18:35.896 "req_id": 1 00:18:35.896 } 00:18:35.896 Got JSON-RPC error response 00:18:35.896 response: 00:18:35.896 { 00:18:35.896 "code": -13, 00:18:35.896 "message": "Permission denied" 00:18:35.896 } 00:18:35.896 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:35.896 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:35.896 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:35.896 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:35.896 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:35.896 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:35.896 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.154 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:36.154 08:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:37.089 08:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:37.089 08:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:37.089 08:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:37.347 08:55:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 427106 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 427106 ']' 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 427106 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 427106 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 427106' 00:18:37.347 killing process with pid 427106 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 427106 00:18:37.347 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 427106 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:37.606 rmmod nvme_rdma 00:18:37.606 rmmod nvme_fabrics 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 451765 ']' 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 451765 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 451765 ']' 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 451765 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.606 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451765 00:18:37.865 08:55:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451765' 00:18:37.865 killing process with pid 451765 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 451765 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 451765 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.JR0 /tmp/spdk.key-sha256.kOQ /tmp/spdk.key-sha384.MUP /tmp/spdk.key-sha512.G2e /tmp/spdk.key-sha512.NxL /tmp/spdk.key-sha384.RZF /tmp/spdk.key-sha256.PUX '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:18:37.865 00:18:37.865 real 2m46.698s 00:18:37.865 user 6m26.224s 00:18:37.865 sys 0m20.886s 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.865 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.865 ************************************ 00:18:37.865 END TEST nvmf_auth_target 00:18:37.865 ************************************ 00:18:38.125 08:55:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:18:38.125 08:55:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:38.126 ************************************ 00:18:38.126 START TEST nvmf_srq_overwhelm 00:18:38.126 ************************************ 00:18:38.126 08:55:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:38.126 * Looking for test storage... 
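The nvmf_auth_target run that ends above exercises DH-HMAC-CHAP key rotation in both directions: keys are rotated on the target with nvmf_subsystem_set_keys, mirrored on the host with bdev_nvme_set_keys, and a deliberately mismatched rotation is expected to fail with JSON-RPC error -13 (Permission denied). A minimal sketch of that check, using only the rpc.py invocations visible in the trace (rpc.py stands for the full scripts/rpc.py path used above; key names and NQNs are the ones from this run):

    # rotate target-side keys first, then mirror them on the host side
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # a host-side rotation that disagrees with the target must be rejected:
    # the RPC returns code -13 "Permission denied", which the NOT helper
    # in the trace converts into a passing assertion
    if rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3; then
        echo "rotation with mismatched keys unexpectedly succeeded" >&2
        exit 1
    fi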
00:18:38.126 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1689 -- # lcov --version 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.126 --rc genhtml_branch_coverage=1 00:18:38.126 --rc genhtml_function_coverage=1 00:18:38.126 --rc genhtml_legend=1 00:18:38.126 --rc geninfo_all_blocks=1 00:18:38.126 --rc geninfo_unexecuted_blocks=1 00:18:38.126 00:18:38.126 ' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.126 --rc genhtml_branch_coverage=1 00:18:38.126 --rc genhtml_function_coverage=1 00:18:38.126 --rc genhtml_legend=1 00:18:38.126 --rc geninfo_all_blocks=1 00:18:38.126 --rc geninfo_unexecuted_blocks=1 00:18:38.126 00:18:38.126 ' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.126 --rc genhtml_branch_coverage=1 00:18:38.126 --rc genhtml_function_coverage=1 00:18:38.126 --rc genhtml_legend=1 00:18:38.126 --rc geninfo_all_blocks=1 00:18:38.126 --rc geninfo_unexecuted_blocks=1 00:18:38.126 00:18:38.126 ' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.126 --rc genhtml_branch_coverage=1 00:18:38.126 --rc genhtml_function_coverage=1 00:18:38.126 --rc genhtml_legend=1 00:18:38.126 --rc geninfo_all_blocks=1 00:18:38.126 --rc geninfo_unexecuted_blocks=1 00:18:38.126 00:18:38.126 ' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
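In the common.sh setup traced above, the host identity is generated once with nvme gen-hostnqn and the hostid is taken from it; both then feed every later nvme connect. A sketch of that derivation, assuming (as the values above suggest) that the hostid is simply the UUID suffix of the generated NQN:

    # derive hostnqn/hostid the way the trace shows common.sh doing it
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip everything up to "uuid:" (assumed)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")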
00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.126 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:38.127 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:18:38.127 08:55:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:44.699 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:44.699 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:44.699 Found net devices under 0000:da:00.0: mlx_0_0 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:44.699 Found net devices under 0000:da:00.1: mlx_0_1 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # is_hw=yes 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ yes == yes ]] 
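The device discovery above resolves each candidate Mellanox PCI function to its kernel net device by globbing sysfs, which is why the trace prints "Found net devices under 0000:da:00.0: mlx_0_0". A condensed sketch of that loop, using the same sysfs path and parameter expansions seen in nvmf/common.sh:

    # map each RDMA-capable PCI function to its netdev name via sysfs
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the basename only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done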
00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # rdma_device_init 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.699 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:44.700 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.700 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:18:44.700 altname enp218s0f0np0 00:18:44.700 altname ens818f0np0 00:18:44.700 inet 192.168.100.8/24 scope global mlx_0_0 00:18:44.700 valid_lft forever preferred_lft forever 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:44.700 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.700 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:18:44.700 altname enp218s0f1np1 00:18:44.700 altname ens818f1np1 00:18:44.700 inet 192.168.100.9/24 scope global mlx_0_1 00:18:44.700 valid_lft forever preferred_lft forever 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # return 0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:44.700 192.168.100.9' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:44.700 192.168.100.9' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # head -n 1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:44.700 192.168.100.9' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # tail -n +2 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # head -n 1 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.700 08:55:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # nvmfpid=458416 00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # waitforlisten 458416 00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 458416 ']' 00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
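Each target IP in RDMA_IP_LIST above is harvested with the same three-stage pipeline: ip -o -4 addr show prints one line per address, awk takes the CIDR field, and cut drops the prefix length. As a standalone helper, mirroring get_ip_address in nvmf/common.sh:

    # first IPv4 address of an interface, without the /24 suffix
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9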
00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.700 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.700 [2024-11-06 08:55:07.046423] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:18:44.700 [2024-11-06 08:55:07.046472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.700 [2024-11-06 08:55:07.121356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.700 [2024-11-06 08:55:07.165304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.701 [2024-11-06 08:55:07.165340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.701 [2024-11-06 08:55:07.165347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.701 [2024-11-06 08:55:07.165353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.701 [2024-11-06 08:55:07.165358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.701 [2024-11-06 08:55:07.166974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.701 [2024-11-06 08:55:07.167082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.701 [2024-11-06 08:55:07.167188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.701 [2024-11-06 08:55:07.167189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 [2024-11-06 08:55:07.323991] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14e3da0/0x14e8290) succeed. 00:18:44.701 [2024-11-06 08:55:07.332962] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14e5430/0x1529930) succeed. 
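nvmfappstart above launches the target as build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that wait, assuming that polling rpc_get_methods on the default /var/tmp/spdk.sock socket is an acceptable readiness probe (the real helper also enforces a retry budget):

    # start the target and poll its JSON-RPC socket until it responds
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done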
00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 Malloc0 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 [2024-11-06 08:55:07.438656] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.701 08:55:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# lsblk -l -o NAME 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:45.638 Malloc1 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.638 08:55:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:46.575 Malloc2 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.575 08:55:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:47.509 08:55:10 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.509 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:47.768 Malloc3 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.768 08:55:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:48.704 
08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.704 Malloc4 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.704 08:55:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:49.641 Malloc5 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.641 08:55:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:18:51.015 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:18:51.015 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:51.015 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:51.015 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:18:51.015 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:51.015 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:18:51.016 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:51.016 08:55:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:18:51.016 
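The six iterations traced above (cnode0 through cnode5) all follow the same pattern: create a subsystem, back it with a 64 MiB malloc bdev, expose it on the RDMA listener, connect from the initiator side, and wait for the block device to appear. Condensed, the script is effectively running the loop below; the host NQN/ID are the values printed in this run, rpc_cmd wraps scripts/rpc.py, and waitforblk is the harness helper whose lsblk/grep polling is visible in the trace:

    HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

    for i in $(seq 0 5); do
        # Subsystem with any-host access (-a) and a fixed-width serial number.
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        # 64 MiB malloc bdev with 512-byte blocks, attached as a namespace.
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        # RDMA listener on the first target IP, port 4420.
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
        # Initiator side: connect, then poll until /dev/nvme${i}n1 shows up.
        nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=$HOSTID \
            -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
        waitforblk nvme${i}n1
    done

Each successful connect is why the fio job file that follows can address /dev/nvme0n1 through /dev/nvme5n1 directly.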
[global]
00:18:51.016 thread=1
00:18:51.016 invalidate=1
00:18:51.016 rw=read
00:18:51.016 time_based=1
00:18:51.016 runtime=10
00:18:51.016 ioengine=libaio
00:18:51.016 direct=1
00:18:51.016 bs=1048576
00:18:51.016 iodepth=128
00:18:51.016 norandommap=1
00:18:51.016 numjobs=13
00:18:51.016 
00:18:51.016 [job0]
00:18:51.016 filename=/dev/nvme0n1
00:18:51.016 [job1]
00:18:51.016 filename=/dev/nvme1n1
00:18:51.016 [job2]
00:18:51.016 filename=/dev/nvme2n1
00:18:51.016 [job3]
00:18:51.016 filename=/dev/nvme3n1
00:18:51.016 [job4]
00:18:51.016 filename=/dev/nvme4n1
00:18:51.016 [job5]
00:18:51.016 filename=/dev/nvme5n1
00:18:51.016 Could not set queue depth (nvme0n1)
00:18:51.016 Could not set queue depth (nvme1n1)
00:18:51.016 Could not set queue depth (nvme2n1)
00:18:51.016 Could not set queue depth (nvme3n1)
00:18:51.016 Could not set queue depth (nvme4n1)
00:18:51.274 Could not set queue depth (nvme5n1)
00:18:51.274 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:51.274 ...
00:18:51.274 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:51.274 ...
00:18:51.274 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:51.274 ...
00:18:51.274 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:51.274 ...
00:18:51.274 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:51.274 ...
00:18:51.274 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:51.274 ...
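The fio-wrapper flags from the launch line map directly onto the [global] section echoed above: -i 1048576 becomes bs, -d 128 becomes iodepth, -t read becomes rw, -r 10 becomes the time_based runtime, and -n 13 becomes numjobs, with one [jobN] stanza per connected namespace (6 devices x 13 jobs = the 78 threads started below). Assuming that mapping, a single-device equivalent can be run without the wrapper:

    # One-device equivalent of the wrapped job above (a sketch under the
    # flag mapping stated in the lead-in; the wrapper additionally fans
    # out to all six namespaces).
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=1048576 --iodepth=128 --numjobs=13 \
        --runtime=10 --time_based --ioengine=libaio --direct=1 \
        --invalidate=1 --norandommap --thread

The "Could not set queue depth" lines are fio warnings rather than errors; the run proceeds, and the requested iodepth is still applied at the libaio level.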
00:18:51.274 fio-3.35 00:18:51.274 Starting 78 threads 00:19:06.258 00:19:06.258 job0: (groupid=0, jobs=1): err= 0: pid=459818: Wed Nov 6 08:55:27 2024 00:19:06.258 read: IOPS=44, BW=44.7MiB/s (46.9MB/s)(580MiB/12964msec) 00:19:06.258 slat (usec): min=423, max=2142.8k, avg=18706.45, stdev=151717.58 00:19:06.258 clat (msec): min=405, max=9054, avg=2727.65, stdev=3235.78 00:19:06.258 lat (msec): min=407, max=9056, avg=2746.36, stdev=3243.11 00:19:06.258 clat percentiles (msec): 00:19:06.258 | 1.00th=[ 414], 5.00th=[ 418], 10.00th=[ 439], 20.00th=[ 472], 00:19:06.258 | 30.00th=[ 718], 40.00th=[ 1020], 50.00th=[ 1318], 60.00th=[ 1586], 00:19:06.258 | 70.00th=[ 1703], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 8926], 00:19:06.258 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:19:06.258 | 99.99th=[ 9060] 00:19:06.258 bw ( KiB/s): min= 2048, max=287294, per=3.34%, avg=92819.90, stdev=93360.57, samples=10 00:19:06.258 iops : min= 2, max= 280, avg=90.50, stdev=91.08, samples=10 00:19:06.258 lat (msec) : 500=21.55%, 750=8.79%, 1000=9.31%, 2000=37.76%, >=2000=22.59% 00:19:06.258 cpu : usr=0.01%, sys=1.09%, ctx=1303, majf=0, minf=32769 00:19:06.258 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:19:06.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.258 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:06.258 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.258 job0: (groupid=0, jobs=1): err= 0: pid=459819: Wed Nov 6 08:55:27 2024 00:19:06.258 read: IOPS=37, BW=37.8MiB/s (39.7MB/s)(488MiB/12898msec) 00:19:06.258 slat (usec): min=42, max=2108.4k, avg=22076.20, stdev=163983.82 00:19:06.258 clat (msec): min=764, max=9221, avg=3135.58, stdev=3309.56 00:19:06.258 lat (msec): min=767, max=9265, avg=3157.66, stdev=3316.63 00:19:06.258 clat percentiles (msec): 00:19:06.258 | 1.00th=[ 768], 5.00th=[ 768], 10.00th=[ 785], 20.00th=[ 818], 00:19:06.258 | 30.00th=[ 860], 40.00th=[ 986], 50.00th=[ 1502], 60.00th=[ 1620], 00:19:06.258 | 70.00th=[ 1854], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060], 00:19:06.258 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:19:06.258 | 99.99th=[ 9194] 00:19:06.258 bw ( KiB/s): min= 2048, max=169984, per=2.66%, avg=73926.60, stdev=68986.70, samples=10 00:19:06.258 iops : min= 2, max= 166, avg=72.10, stdev=67.43, samples=10 00:19:06.258 lat (msec) : 1000=40.57%, 2000=30.74%, >=2000=28.69% 00:19:06.258 cpu : usr=0.02%, sys=0.89%, ctx=764, majf=0, minf=32769 00:19:06.258 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.6%, >=64=87.1% 00:19:06.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.258 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:06.258 issued rwts: total=488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.258 job0: (groupid=0, jobs=1): err= 0: pid=459820: Wed Nov 6 08:55:27 2024 00:19:06.258 read: IOPS=24, BW=24.1MiB/s (25.3MB/s)(311MiB/12890msec) 00:19:06.258 slat (usec): min=379, max=4287.7k, avg=34676.59, stdev=293589.64 00:19:06.258 clat (msec): min=565, max=11813, avg=5096.25, stdev=5113.33 00:19:06.258 lat (msec): min=569, max=11814, avg=5130.92, stdev=5121.80 00:19:06.258 clat percentiles (msec): 00:19:06.258 | 1.00th=[ 567], 5.00th=[ 592], 10.00th=[ 651], 20.00th=[ 667], 00:19:06.258 
| 30.00th=[ 684], 40.00th=[ 793], 50.00th=[ 1083], 60.00th=[ 8557], 00:19:06.258 | 70.00th=[11073], 80.00th=[11476], 90.00th=[11610], 95.00th=[11745], 00:19:06.258 | 99.00th=[11745], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:19:06.258 | 99.99th=[11879] 00:19:06.258 bw ( KiB/s): min= 2052, max=167936, per=1.94%, avg=53812.00, stdev=60947.32, samples=7 00:19:06.258 iops : min= 2, max= 164, avg=52.43, stdev=59.47, samples=7 00:19:06.258 lat (msec) : 750=37.62%, 1000=9.00%, 2000=9.65%, >=2000=43.73% 00:19:06.258 cpu : usr=0.03%, sys=0.66%, ctx=687, majf=0, minf=32769 00:19:06.258 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.3%, >=64=79.7% 00:19:06.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.258 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:06.258 issued rwts: total=311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.258 job0: (groupid=0, jobs=1): err= 0: pid=459821: Wed Nov 6 08:55:27 2024 00:19:06.258 read: IOPS=220, BW=220MiB/s (231MB/s)(2846MiB/12909msec) 00:19:06.258 slat (usec): min=42, max=2124.5k, avg=3792.14, stdev=56605.90 00:19:06.258 clat (msec): min=93, max=6567, avg=529.22, stdev=1336.72 00:19:06.258 lat (msec): min=93, max=6568, avg=533.01, stdev=1341.52 00:19:06.258 clat percentiles (msec): 00:19:06.258 | 1.00th=[ 94], 5.00th=[ 94], 10.00th=[ 94], 20.00th=[ 95], 00:19:06.258 | 30.00th=[ 95], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 96], 00:19:06.258 | 70.00th=[ 97], 80.00th=[ 489], 90.00th=[ 1133], 95.00th=[ 1636], 00:19:06.258 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:19:06.258 | 99.99th=[ 6544] 00:19:06.258 bw ( KiB/s): min= 2052, max=1370112, per=15.41%, avg=428338.54, stdev=548839.13, samples=13 00:19:06.258 iops : min= 2, max= 1338, avg=418.15, stdev=536.06, samples=13 00:19:06.258 lat (msec) : 100=73.82%, 250=3.34%, 500=3.16%, 750=6.04%, 1000=2.14% 00:19:06.258 lat (msec) : 2000=6.96%, >=2000=4.53% 00:19:06.258 cpu : usr=0.02%, sys=1.84%, ctx=3771, majf=0, minf=32769 00:19:06.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:06.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.258 issued rwts: total=2846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.258 job0: (groupid=0, jobs=1): err= 0: pid=459822: Wed Nov 6 08:55:27 2024 00:19:06.258 read: IOPS=22, BW=22.8MiB/s (23.9MB/s)(297MiB/13003msec) 00:19:06.258 slat (usec): min=132, max=2096.8k, avg=36609.58, stdev=231924.05 00:19:06.258 clat (msec): min=1585, max=10927, avg=5256.16, stdev=3137.79 00:19:06.258 lat (msec): min=1592, max=10933, avg=5292.77, stdev=3144.18 00:19:06.258 clat percentiles (msec): 00:19:06.258 | 1.00th=[ 1603], 5.00th=[ 1653], 10.00th=[ 1703], 20.00th=[ 1770], 00:19:06.258 | 30.00th=[ 2970], 40.00th=[ 3306], 50.00th=[ 3574], 60.00th=[ 8154], 00:19:06.258 | 70.00th=[ 8221], 80.00th=[ 8356], 90.00th=[ 8490], 95.00th=[10805], 00:19:06.258 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:19:06.258 | 99.99th=[10939] 00:19:06.259 bw ( KiB/s): min= 2052, max=176128, per=1.79%, avg=49737.71, stdev=62820.20, samples=7 00:19:06.259 iops : min= 2, max= 172, avg=48.57, stdev=61.35, samples=7 00:19:06.259 lat (msec) : 2000=22.56%, >=2000=77.44% 00:19:06.259 cpu : usr=0.01%, 
sys=0.85%, ctx=689, majf=0, minf=32769 00:19:06.259 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.8% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:06.259 issued rwts: total=297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459823: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=22, BW=22.2MiB/s (23.2MB/s)(286MiB/12901msec) 00:19:06.259 slat (usec): min=81, max=2095.6k, avg=37705.91, stdev=248824.50 00:19:06.259 clat (msec): min=653, max=11987, avg=5563.83, stdev=5205.58 00:19:06.259 lat (msec): min=654, max=11989, avg=5601.53, stdev=5212.73 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 651], 5.00th=[ 659], 10.00th=[ 659], 20.00th=[ 667], 00:19:06.259 | 30.00th=[ 676], 40.00th=[ 726], 50.00th=[ 1418], 60.00th=[ 8557], 00:19:06.259 | 70.00th=[11610], 80.00th=[11745], 90.00th=[11879], 95.00th=[11879], 00:19:06.259 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:06.259 | 99.99th=[12013] 00:19:06.259 bw ( KiB/s): min= 2052, max=181908, per=1.46%, avg=40659.00, stdev=67118.08, samples=8 00:19:06.259 iops : min= 2, max= 177, avg=39.63, stdev=65.35, samples=8 00:19:06.259 lat (msec) : 750=47.20%, 1000=2.10%, 2000=0.70%, >=2000=50.00% 00:19:06.259 cpu : usr=0.01%, sys=0.82%, ctx=258, majf=0, minf=32769 00:19:06.259 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=78.0% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:06.259 issued rwts: total=286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459824: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=17, BW=17.7MiB/s (18.5MB/s)(228MiB/12904msec) 00:19:06.259 slat (usec): min=44, max=3998.8k, avg=47360.73, stdev=327646.25 00:19:06.259 clat (msec): min=1216, max=11520, avg=6724.86, stdev=4599.49 00:19:06.259 lat (msec): min=1239, max=11553, avg=6772.22, stdev=4593.86 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 1234], 5.00th=[ 1267], 10.00th=[ 1318], 20.00th=[ 1435], 00:19:06.259 | 30.00th=[ 1469], 40.00th=[ 2106], 50.00th=[10537], 60.00th=[10805], 00:19:06.259 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11342], 95.00th=[11476], 00:19:06.259 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:19:06.259 | 99.99th=[11476] 00:19:06.259 bw ( KiB/s): min= 2048, max=100151, per=1.24%, avg=34441.83, stdev=45743.96, samples=6 00:19:06.259 iops : min= 2, max= 97, avg=33.50, stdev=44.44, samples=6 00:19:06.259 lat (msec) : 2000=39.91%, >=2000=60.09% 00:19:06.259 cpu : usr=0.02%, sys=0.78%, ctx=510, majf=0, minf=32769 00:19:06.259 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.4% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:19:06.259 issued rwts: total=228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459825: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=4, BW=4908KiB/s 
(5026kB/s)(62.0MiB/12935msec) 00:19:06.259 slat (usec): min=643, max=2095.5k, avg=174489.58, stdev=565519.42 00:19:06.259 clat (msec): min=2116, max=12933, avg=11236.09, stdev=2754.52 00:19:06.259 lat (msec): min=4211, max=12934, avg=11410.58, stdev=2498.06 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[ 6409], 20.00th=[ 8557], 00:19:06.259 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:19:06.259 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:06.259 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:06.259 | 99.99th=[12953] 00:19:06.259 lat (msec) : >=2000=100.00% 00:19:06.259 cpu : usr=0.00%, sys=0.36%, ctx=89, majf=0, minf=15873 00:19:06.259 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:06.259 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459826: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=60, BW=60.7MiB/s (63.6MB/s)(788MiB/12985msec) 00:19:06.259 slat (usec): min=37, max=2158.3k, avg=13773.34, stdev=130824.33 00:19:06.259 clat (msec): min=399, max=9001, avg=1901.30, stdev=2988.08 00:19:06.259 lat (msec): min=401, max=9003, avg=1915.07, stdev=2997.13 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 401], 5.00th=[ 405], 10.00th=[ 405], 20.00th=[ 405], 00:19:06.259 | 30.00th=[ 409], 40.00th=[ 414], 50.00th=[ 418], 60.00th=[ 422], 00:19:06.259 | 70.00th=[ 793], 80.00th=[ 1770], 90.00th=[ 8792], 95.00th=[ 8926], 00:19:06.259 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:19:06.259 | 99.99th=[ 9060] 00:19:06.259 bw ( KiB/s): min= 2052, max=319488, per=6.08%, avg=169137.25, stdev=145786.26, samples=8 00:19:06.259 iops : min= 2, max= 312, avg=165.12, stdev=142.31, samples=8 00:19:06.259 lat (msec) : 500=65.61%, 750=3.81%, 1000=3.17%, 2000=8.76%, >=2000=18.65% 00:19:06.259 cpu : usr=0.04%, sys=1.19%, ctx=980, majf=0, minf=32769 00:19:06.259 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:06.259 issued rwts: total=788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459827: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=68, BW=68.7MiB/s (72.1MB/s)(890MiB/12949msec) 00:19:06.259 slat (usec): min=35, max=2102.1k, avg=12175.29, stdev=107656.27 00:19:06.259 clat (msec): min=504, max=8515, avg=1754.76, stdev=1822.51 00:19:06.259 lat (msec): min=505, max=8547, avg=1766.94, stdev=1826.55 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 506], 5.00th=[ 514], 10.00th=[ 550], 20.00th=[ 617], 00:19:06.259 | 30.00th=[ 651], 40.00th=[ 751], 50.00th=[ 802], 60.00th=[ 860], 00:19:06.259 | 70.00th=[ 1070], 80.00th=[ 2970], 90.00th=[ 5671], 95.00th=[ 5873], 00:19:06.259 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 8490], 99.95th=[ 8490], 00:19:06.259 | 99.99th=[ 8490] 00:19:06.259 bw ( KiB/s): min= 2052, max=253952, per=5.11%, avg=141992.36, stdev=81174.35, samples=11 00:19:06.259 iops : 
min= 2, max= 248, avg=138.55, stdev=79.22, samples=11 00:19:06.259 lat (msec) : 750=39.66%, 1000=29.66%, 2000=2.25%, >=2000=28.43% 00:19:06.259 cpu : usr=0.01%, sys=1.15%, ctx=888, majf=0, minf=32769 00:19:06.259 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.259 issued rwts: total=890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459828: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(162MiB/12895msec) 00:19:06.259 slat (usec): min=473, max=4212.7k, avg=66612.59, stdev=405531.97 00:19:06.259 clat (msec): min=1386, max=12053, avg=9182.85, stdev=3933.13 00:19:06.259 lat (msec): min=1396, max=12065, avg=9249.46, stdev=3890.00 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 1401], 5.00th=[ 1469], 10.00th=[ 1653], 20.00th=[ 2106], 00:19:06.259 | 30.00th=[10805], 40.00th=[10939], 50.00th=[11073], 60.00th=[11208], 00:19:06.259 | 70.00th=[11342], 80.00th=[11610], 90.00th=[11745], 95.00th=[11879], 00:19:06.259 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:06.259 | 99.99th=[12013] 00:19:06.259 bw ( KiB/s): min= 2048, max=47104, per=0.52%, avg=14329.40, stdev=19641.18, samples=5 00:19:06.259 iops : min= 2, max= 46, avg=13.80, stdev=19.14, samples=5 00:19:06.259 lat (msec) : 2000=19.75%, >=2000=80.25% 00:19:06.259 cpu : usr=0.00%, sys=0.70%, ctx=452, majf=0, minf=32769 00:19:06.259 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.9%, 32=19.8%, >=64=61.1% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=97.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.8% 00:19:06.259 issued rwts: total=162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459829: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=26, BW=26.4MiB/s (27.7MB/s)(342MiB/12967msec) 00:19:06.259 slat (usec): min=98, max=2143.6k, avg=31735.04, stdev=203993.18 00:19:06.259 clat (msec): min=973, max=10461, avg=4596.71, stdev=4057.39 00:19:06.259 lat (msec): min=976, max=10465, avg=4628.44, stdev=4061.97 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 1011], 5.00th=[ 1167], 10.00th=[ 1284], 20.00th=[ 1435], 00:19:06.259 | 30.00th=[ 1485], 40.00th=[ 1519], 50.00th=[ 1586], 60.00th=[ 1670], 00:19:06.259 | 70.00th=[ 9731], 80.00th=[10000], 90.00th=[10134], 95.00th=[10268], 00:19:06.259 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:19:06.259 | 99.99th=[10402] 00:19:06.259 bw ( KiB/s): min= 2048, max=129024, per=1.76%, avg=48908.89, stdev=48966.27, samples=9 00:19:06.259 iops : min= 2, max= 126, avg=47.67, stdev=47.77, samples=9 00:19:06.259 lat (msec) : 1000=0.88%, 2000=59.36%, >=2000=39.77% 00:19:06.259 cpu : usr=0.00%, sys=0.97%, ctx=799, majf=0, minf=32769 00:19:06.259 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.4%, >=64=81.6% 00:19:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.259 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:06.259 issued rwts: total=342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.259 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:19:06.259 job0: (groupid=0, jobs=1): err= 0: pid=459830: Wed Nov 6 08:55:27 2024 00:19:06.259 read: IOPS=73, BW=73.3MiB/s (76.9MB/s)(942MiB/12845msec) 00:19:06.259 slat (usec): min=34, max=4207.7k, avg=10613.05, stdev=153357.61 00:19:06.259 clat (msec): min=349, max=6937, avg=1663.70, stdev=2227.76 00:19:06.259 lat (msec): min=351, max=6940, avg=1674.32, stdev=2237.52 00:19:06.259 clat percentiles (msec): 00:19:06.259 | 1.00th=[ 376], 5.00th=[ 397], 10.00th=[ 422], 20.00th=[ 426], 00:19:06.260 | 30.00th=[ 430], 40.00th=[ 451], 50.00th=[ 456], 60.00th=[ 481], 00:19:06.260 | 70.00th=[ 502], 80.00th=[ 3004], 90.00th=[ 6879], 95.00th=[ 6879], 00:19:06.260 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:19:06.260 | 99.99th=[ 6946] 00:19:06.260 bw ( KiB/s): min=12288, max=308630, per=7.10%, avg=197247.50, stdev=113303.92, samples=8 00:19:06.260 iops : min= 12, max= 301, avg=192.50, stdev=110.54, samples=8 00:19:06.260 lat (msec) : 500=69.64%, 750=3.08%, >=2000=27.28% 00:19:06.260 cpu : usr=0.02%, sys=0.95%, ctx=1314, majf=0, minf=32769 00:19:06.260 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.260 issued rwts: total=942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459831: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=81, BW=81.7MiB/s (85.6MB/s)(1059MiB/12969msec) 00:19:06.260 slat (usec): min=63, max=2158.1k, avg=10221.26, stdev=111860.19 00:19:06.260 clat (msec): min=397, max=8973, avg=1509.87, stdev=2605.04 00:19:06.260 lat (msec): min=401, max=8974, avg=1520.09, stdev=2613.73 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 401], 5.00th=[ 405], 10.00th=[ 405], 20.00th=[ 409], 00:19:06.260 | 30.00th=[ 414], 40.00th=[ 418], 50.00th=[ 514], 60.00th=[ 676], 00:19:06.260 | 70.00th=[ 684], 80.00th=[ 701], 90.00th=[ 8658], 95.00th=[ 8792], 00:19:06.260 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:19:06.260 | 99.99th=[ 8926] 00:19:06.260 bw ( KiB/s): min= 2052, max=319488, per=6.24%, avg=173537.82, stdev=123834.73, samples=11 00:19:06.260 iops : min= 2, max= 312, avg=169.36, stdev=120.98, samples=11 00:19:06.260 lat (msec) : 500=49.58%, 750=35.79%, 1000=1.23%, >=2000=13.41% 00:19:06.260 cpu : usr=0.05%, sys=1.57%, ctx=929, majf=0, minf=32769 00:19:06.260 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.260 issued rwts: total=1059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459832: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=3, BW=3713KiB/s (3802kB/s)(47.0MiB/12961msec) 00:19:06.260 slat (usec): min=402, max=2163.3k, avg=230584.43, stdev=648968.80 00:19:06.260 clat (msec): min=2122, max=12959, avg=11762.17, stdev=2567.75 00:19:06.260 lat (msec): min=4263, max=12960, avg=11992.76, stdev=2133.14 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 2123], 5.00th=[ 4329], 10.00th=[ 8557], 20.00th=[10671], 00:19:06.260 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12818], 
60.00th=[12818], 00:19:06.260 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:06.260 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:06.260 | 99.99th=[12953] 00:19:06.260 lat (msec) : >=2000=100.00% 00:19:06.260 cpu : usr=0.00%, sys=0.25%, ctx=78, majf=0, minf=12033 00:19:06.260 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:06.260 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459833: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=31, BW=31.4MiB/s (32.9MB/s)(405MiB/12899msec) 00:19:06.260 slat (usec): min=46, max=4271.4k, avg=26601.40, stdev=258365.87 00:19:06.260 clat (msec): min=661, max=11241, avg=3907.46, stdev=4428.60 00:19:06.260 lat (msec): min=663, max=11242, avg=3934.06, stdev=4439.91 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 667], 5.00th=[ 667], 10.00th=[ 667], 20.00th=[ 676], 00:19:06.260 | 30.00th=[ 709], 40.00th=[ 735], 50.00th=[ 776], 60.00th=[ 852], 00:19:06.260 | 70.00th=[ 8557], 80.00th=[10805], 90.00th=[10939], 95.00th=[11208], 00:19:06.260 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:19:06.260 | 99.99th=[11208] 00:19:06.260 bw ( KiB/s): min= 2052, max=192512, per=2.92%, avg=81282.29, stdev=78167.60, samples=7 00:19:06.260 iops : min= 2, max= 188, avg=79.29, stdev=76.19, samples=7 00:19:06.260 lat (msec) : 750=45.93%, 1000=14.57%, >=2000=39.51% 00:19:06.260 cpu : usr=0.02%, sys=0.85%, ctx=362, majf=0, minf=32769 00:19:06.260 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:06.260 issued rwts: total=405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459834: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=38, BW=38.9MiB/s (40.8MB/s)(501MiB/12867msec) 00:19:06.260 slat (usec): min=920, max=2183.6k, avg=21427.11, stdev=163769.06 00:19:06.260 clat (msec): min=799, max=9391, avg=3055.63, stdev=3417.53 00:19:06.260 lat (msec): min=804, max=9400, avg=3077.06, stdev=3425.17 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 802], 5.00th=[ 810], 10.00th=[ 810], 20.00th=[ 818], 00:19:06.260 | 30.00th=[ 827], 40.00th=[ 919], 50.00th=[ 1133], 60.00th=[ 1469], 00:19:06.260 | 70.00th=[ 1586], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9194], 00:19:06.260 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:19:06.260 | 99.99th=[ 9329] 00:19:06.260 bw ( KiB/s): min= 2052, max=159744, per=3.06%, avg=85089.33, stdev=68058.44, samples=9 00:19:06.260 iops : min= 2, max= 156, avg=83.00, stdev=66.48, samples=9 00:19:06.260 lat (msec) : 1000=43.71%, 2000=29.54%, >=2000=26.75% 00:19:06.260 cpu : usr=0.05%, sys=1.31%, ctx=965, majf=0, minf=32769 00:19:06.260 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:06.260 issued 
rwts: total=501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459835: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=41, BW=41.8MiB/s (43.8MB/s)(538MiB/12871msec) 00:19:06.260 slat (usec): min=488, max=2183.6k, avg=19960.13, stdev=158209.00 00:19:06.260 clat (msec): min=810, max=9388, avg=2904.51, stdev=3336.85 00:19:06.260 lat (msec): min=815, max=9392, avg=2924.47, stdev=3344.91 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 818], 5.00th=[ 818], 10.00th=[ 827], 20.00th=[ 827], 00:19:06.260 | 30.00th=[ 835], 40.00th=[ 969], 50.00th=[ 1167], 60.00th=[ 1267], 00:19:06.260 | 70.00th=[ 1435], 80.00th=[ 8658], 90.00th=[ 9060], 95.00th=[ 9194], 00:19:06.260 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:19:06.260 | 99.99th=[ 9329] 00:19:06.260 bw ( KiB/s): min= 2052, max=161792, per=3.03%, avg=84154.30, stdev=65328.91, samples=10 00:19:06.260 iops : min= 2, max= 158, avg=82.10, stdev=63.79, samples=10 00:19:06.260 lat (msec) : 1000=41.08%, 2000=34.01%, >=2000=24.91% 00:19:06.260 cpu : usr=0.02%, sys=1.36%, ctx=949, majf=0, minf=32769 00:19:06.260 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:06.260 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459836: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=7, BW=7428KiB/s (7607kB/s)(94.0MiB/12958msec) 00:19:06.260 slat (usec): min=597, max=2150.5k, avg=115260.11, stdev=440801.99 00:19:06.260 clat (msec): min=2122, max=12943, avg=11844.21, stdev=1940.64 00:19:06.260 lat (msec): min=4251, max=12957, avg=11959.47, stdev=1658.24 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 2123], 5.00th=[ 6409], 10.00th=[11879], 20.00th=[12013], 00:19:06.260 | 30.00th=[12147], 40.00th=[12281], 50.00th=[12281], 60.00th=[12416], 00:19:06.260 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:19:06.260 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:06.260 | 99.99th=[12953] 00:19:06.260 lat (msec) : >=2000=100.00% 00:19:06.260 cpu : usr=0.00%, sys=0.58%, ctx=208, majf=0, minf=24065 00:19:06.260 IO depths : 1=1.1%, 2=2.1%, 4=4.3%, 8=8.5%, 16=17.0%, 32=34.0%, >=64=33.0% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:06.260 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459837: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=3, BW=3082KiB/s (3156kB/s)(39.0MiB/12956msec) 00:19:06.260 slat (usec): min=762, max=2102.1k, avg=278144.07, stdev=701225.10 00:19:06.260 clat (msec): min=2107, max=12953, avg=10838.17, stdev=3222.35 00:19:06.260 lat (msec): min=4209, max=12955, avg=11116.31, stdev=2901.06 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 8490], 00:19:06.260 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:19:06.260 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 
95.00th=[12953], 00:19:06.260 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:06.260 | 99.99th=[12953] 00:19:06.260 lat (msec) : >=2000=100.00% 00:19:06.260 cpu : usr=0.00%, sys=0.27%, ctx=75, majf=0, minf=9985 00:19:06.260 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:19:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:06.260 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.260 job1: (groupid=0, jobs=1): err= 0: pid=459838: Wed Nov 6 08:55:27 2024 00:19:06.260 read: IOPS=6, BW=6473KiB/s (6628kB/s)(82.0MiB/12973msec) 00:19:06.260 slat (usec): min=772, max=2087.1k, avg=132378.09, stdev=496395.96 00:19:06.260 clat (msec): min=2116, max=12971, avg=10727.02, stdev=3326.74 00:19:06.260 lat (msec): min=4188, max=12972, avg=10859.40, stdev=3193.19 00:19:06.260 clat percentiles (msec): 00:19:06.260 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:19:06.261 | 30.00th=[ 8557], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:19:06.261 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:06.261 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:06.261 | 99.99th=[12953] 00:19:06.261 lat (msec) : >=2000=100.00% 00:19:06.261 cpu : usr=0.00%, sys=0.56%, ctx=92, majf=0, minf=20993 00:19:06.261 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=9.8%, 16=19.5%, 32=39.0%, >=64=23.2% 00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:06.261 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.261 job1: (groupid=0, jobs=1): err= 0: pid=459839: Wed Nov 6 08:55:27 2024 00:19:06.261 read: IOPS=1, BW=1350KiB/s (1382kB/s)(17.0MiB/12898msec) 00:19:06.261 slat (usec): min=1001, max=4218.4k, avg=633208.46, stdev=1235403.99 00:19:06.261 clat (msec): min=2133, max=12888, avg=10845.26, stdev=3677.20 00:19:06.261 lat (msec): min=4263, max=12897, avg=11478.47, stdev=2935.17 00:19:06.261 clat percentiles (msec): 00:19:06.261 | 1.00th=[ 2140], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 8557], 00:19:06.261 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:19:06.261 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953], 00:19:06.261 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:06.261 | 99.99th=[12953] 00:19:06.261 lat (msec) : >=2000=100.00% 00:19:06.261 cpu : usr=0.00%, sys=0.11%, ctx=42, majf=0, minf=4353 00:19:06.261 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:06.261 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.261 job1: (groupid=0, jobs=1): err= 0: pid=459840: Wed Nov 6 08:55:27 2024 00:19:06.261 read: IOPS=90, BW=90.6MiB/s (95.0MB/s)(1164MiB/12849msec) 00:19:06.261 slat (usec): min=35, max=2140.6k, avg=9223.00, stdev=121222.45 00:19:06.261 clat (msec): min=227, max=10644, avg=737.07, stdev=1378.47 00:19:06.261 lat 
(msec): min=231, max=10688, avg=746.29, stdev=1408.53
00:19:06.261 clat percentiles (msec):
00:19:06.261 | 1.00th=[ 234], 5.00th=[ 241], 10.00th=[ 241], 20.00th=[ 241],
00:19:06.261 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 245],
00:19:06.261 | 70.00th=[ 245], 80.00th=[ 259], 90.00th=[ 4111], 95.00th=[ 4178],
00:19:06.261 | 99.00th=[ 6409], 99.50th=[ 6678], 99.90th=[ 8490], 99.95th=[10671],
00:19:06.261 | 99.99th=[10671]
00:19:06.261 bw ( KiB/s): min= 2052, max=528384, per=12.73%, avg=353963.33, stdev=211307.88, samples=6
00:19:06.261 iops : min= 2, max= 516, avg=345.67, stdev=206.36, samples=6
00:19:06.261 lat (msec) : 250=78.01%, 500=10.22%, >=2000=11.77%
00:19:06.261 cpu : usr=0.05%, sys=1.20%, ctx=1051, majf=0, minf=32769
00:19:06.261 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6%
00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.261 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.261 issued rwts: total=1164,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.261 job1: (groupid=0, jobs=1): err= 0: pid=459841: Wed Nov 6 08:55:27 2024
00:19:06.261 read: IOPS=33, BW=33.9MiB/s (35.5MB/s)(438MiB/12934msec)
00:19:06.261 slat (usec): min=46, max=2094.7k, avg=24654.12, stdev=202346.38
00:19:06.261 clat (msec): min=339, max=12924, avg=3139.02, stdev=4211.65
00:19:06.261 lat (msec): min=341, max=12925, avg=3163.68, stdev=4232.28
00:19:06.261 clat percentiles (msec):
00:19:06.261 | 1.00th=[ 342], 5.00th=[ 342], 10.00th=[ 342], 20.00th=[ 347],
00:19:06.261 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 368], 60.00th=[ 372],
00:19:06.261 | 70.00th=[ 4245], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9731],
00:19:06.261 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.261 | 99.99th=[12953]
00:19:06.261 bw ( KiB/s): min= 2052, max=315392, per=2.86%, avg=79616.50, stdev=134565.39, samples=8
00:19:06.261 iops : min= 2, max= 308, avg=77.75, stdev=131.41, samples=8
00:19:06.261 lat (msec) : 500=65.98%, 2000=1.14%, >=2000=32.88%
00:19:06.261 cpu : usr=0.02%, sys=0.81%, ctx=359, majf=0, minf=32769
00:19:06.261 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.3%, >=64=85.6%
00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.261 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:19:06.261 issued rwts: total=438,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.261 job1: (groupid=0, jobs=1): err= 0: pid=459842: Wed Nov 6 08:55:27 2024
00:19:06.261 read: IOPS=1, BW=1515KiB/s (1551kB/s)(19.0MiB/12843msec)
00:19:06.261 slat (usec): min=1073, max=2124.3k, avg=565488.77, stdev=934102.66
00:19:06.261 clat (msec): min=2098, max=12838, avg=8187.34, stdev=3730.53
00:19:06.261 lat (msec): min=4199, max=12842, avg=8752.83, stdev=3567.04
00:19:06.261 clat percentiles (msec):
00:19:06.261 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4212], 20.00th=[ 4245],
00:19:06.261 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671],
00:19:06.261 | 70.00th=[10671], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:19:06.261 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.261 | 99.99th=[12818]
00:19:06.261 lat (msec) : >=2000=100.00%
00:19:06.261 cpu : usr=0.00%, sys=0.12%, ctx=45, majf=0, minf=4865
00:19:06.261 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0%
00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.261 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.261 job1: (groupid=0, jobs=1): err= 0: pid=459843: Wed Nov 6 08:55:27 2024
00:19:06.261 read: IOPS=4, BW=4938KiB/s (5056kB/s)(62.0MiB/12858msec)
00:19:06.261 slat (usec): min=590, max=2117.2k, avg=173226.66, stdev=567956.22
00:19:06.261 clat (msec): min=2116, max=12852, avg=8984.72, stdev=3261.90
00:19:06.261 lat (msec): min=4186, max=12857, avg=9157.95, stdev=3175.22
00:19:06.261 clat percentiles (msec):
00:19:06.261 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342],
00:19:06.261 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671],
00:19:06.261 | 70.00th=[10671], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:19:06.261 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.261 | 99.99th=[12818]
00:19:06.261 lat (msec) : >=2000=100.00%
00:19:06.261 cpu : usr=0.01%, sys=0.37%, ctx=56, majf=0, minf=15873
00:19:06.261 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0%
00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.261 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.261 job2: (groupid=0, jobs=1): err= 0: pid=459844: Wed Nov 6 08:55:27 2024
00:19:06.261 read: IOPS=7, BW=8061KiB/s (8254kB/s)(102MiB/12958msec)
00:19:06.261 slat (usec): min=750, max=2071.2k, avg=106283.31, stdev=445359.57
00:19:06.261 clat (msec): min=2115, max=12956, avg=10835.65, stdev=3154.77
00:19:06.261 lat (msec): min=4170, max=12957, avg=10941.93, stdev=3038.57
00:19:06.261 clat percentiles (msec):
00:19:06.261 | 1.00th=[ 4178], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 8490],
00:19:06.261 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953],
00:19:06.261 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.261 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.261 | 99.99th=[12953]
00:19:06.261 lat (msec) : >=2000=100.00%
00:19:06.261 cpu : usr=0.00%, sys=0.63%, ctx=99, majf=0, minf=26113
00:19:06.261 IO depths : 1=1.0%, 2=2.0%, 4=3.9%, 8=7.8%, 16=15.7%, 32=31.4%, >=64=38.2%
00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.261 issued rwts: total=102,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.261 job2: (groupid=0, jobs=1): err= 0: pid=459845: Wed Nov 6 08:55:27 2024
00:19:06.261 read: IOPS=2, BW=2776KiB/s (2843kB/s)(35.0MiB/12909msec)
00:19:06.261 slat (usec): min=889, max=2076.5k, avg=308396.95, stdev=724952.19
00:19:06.261 clat (msec): min=2114, max=12903, avg=9197.64, stdev=3650.40
00:19:06.261 lat (msec): min=4171, max=12908, avg=9506.04, stdev=3486.66
00:19:06.261 clat percentiles (msec):
00:19:06.261 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4245],
00:19:06.261 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671],
00:19:06.261 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953],
00:19:06.261 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.261 | 99.99th=[12953]
00:19:06.261 lat (msec) : >=2000=100.00%
00:19:06.261 cpu : usr=0.00%, sys=0.22%, ctx=71, majf=0, minf=8961
00:19:06.261 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0%
00:19:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.261 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.261 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.261 job2: (groupid=0, jobs=1): err= 0: pid=459846: Wed Nov 6 08:55:27 2024
00:19:06.261 read: IOPS=5, BW=5908KiB/s (6049kB/s)(74.0MiB/12827msec)
00:19:06.261 slat (usec): min=741, max=2059.2k, avg=144697.32, stdev=514320.87
00:19:06.261 clat (msec): min=2119, max=12826, avg=9259.37, stdev=3246.97
00:19:06.261 lat (msec): min=4175, max=12826, avg=9404.07, stdev=3161.89
00:19:06.261 clat percentiles (msec):
00:19:06.261 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342],
00:19:06.261 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671],
00:19:06.261 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:19:06.261 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.261 | 99.99th=[12818]
00:19:06.262 lat (msec) : >=2000=100.00%
00:19:06.262 cpu : usr=0.00%, sys=0.45%, ctx=61, majf=0, minf=18945
00:19:06.262 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.262 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459847: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=0, BW=798KiB/s (817kB/s)(10.0MiB/12837msec)
00:19:06.262 slat (msec): min=9, max=4209, avg=1072.73, stdev=1464.24
00:19:06.262 clat (msec): min=2108, max=12820, avg=8924.60, stdev=4120.93
00:19:06.262 lat (msec): min=4205, max=12836, avg=9997.33, stdev=3498.87
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 4212],
00:19:06.262 | 30.00th=[ 4245], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671],
00:19:06.262 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818],
00:19:06.262 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.262 | 99.99th=[12818]
00:19:06.262 lat (msec) : >=2000=100.00%
00:19:06.262 cpu : usr=0.01%, sys=0.05%, ctx=54, majf=0, minf=2561
00:19:06.262 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459848: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=56, BW=56.4MiB/s (59.1MB/s)(606MiB/10745msec)
00:19:06.262 slat (usec): min=53, max=2114.7k, avg=16495.67, stdev=144746.07
00:19:06.262 clat (msec): min=503, max=9296, avg=1046.76, stdev=1316.30
00:19:06.262 lat (msec): min=505, max=9352, avg=1063.25, stdev=1359.90
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 506], 5.00th=[ 514], 10.00th=[ 518], 20.00th=[ 531],
00:19:06.262 | 30.00th=[ 575], 40.00th=[ 718], 50.00th=[ 776], 60.00th=[ 844],
00:19:06.262 | 70.00th=[ 927], 80.00th=[ 1028], 90.00th=[ 1150], 95.00th=[ 1385],
00:19:06.262 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 9329], 99.95th=[ 9329],
00:19:06.262 | 99.99th=[ 9329]
00:19:06.262 bw ( KiB/s): min=28672, max=243712, per=5.56%, avg=154434.00, stdev=80312.95, samples=6
00:19:06.262 iops : min= 28, max= 238, avg=150.67, stdev=78.43, samples=6
00:19:06.262 lat (msec) : 750=44.72%, 1000=33.50%, 2000=17.49%, >=2000=4.29%
00:19:06.262 cpu : usr=0.03%, sys=1.12%, ctx=1183, majf=0, minf=32769
00:19:06.262 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:19:06.262 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459849: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=24, BW=24.3MiB/s (25.5MB/s)(313MiB/12894msec)
00:19:06.262 slat (usec): min=59, max=2115.9k, avg=34364.07, stdev=232698.83
00:19:06.262 clat (msec): min=587, max=10690, avg=3720.86, stdev=3229.65
00:19:06.262 lat (msec): min=589, max=10707, avg=3755.23, stdev=3246.17
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 592], 5.00th=[ 609], 10.00th=[ 625], 20.00th=[ 634],
00:19:06.262 | 30.00th=[ 642], 40.00th=[ 651], 50.00th=[ 1351], 60.00th=[ 4799],
00:19:06.262 | 70.00th=[ 7282], 80.00th=[ 7416], 90.00th=[ 7550], 95.00th=[ 7684],
00:19:06.262 | 99.00th=[ 7752], 99.50th=[ 8658], 99.90th=[10671], 99.95th=[10671],
00:19:06.262 | 99.99th=[10671]
00:19:06.262 bw ( KiB/s): min= 2048, max=194560, per=1.96%, avg=54449.86, stdev=74460.24, samples=7
00:19:06.262 iops : min= 2, max= 190, avg=53.14, stdev=72.69, samples=7
00:19:06.262 lat (msec) : 750=49.20%, 2000=0.96%, >=2000=49.84%
00:19:06.262 cpu : usr=0.01%, sys=0.79%, ctx=295, majf=0, minf=32769
00:19:06.262 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.2%, >=64=79.9%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:19:06.262 issued rwts: total=313,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459850: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=5, BW=5654KiB/s (5790kB/s)(71.0MiB/12859msec)
00:19:06.262 slat (usec): min=478, max=2069.4k, avg=151284.63, stdev=526895.46
00:19:06.262 clat (msec): min=2117, max=12857, avg=9272.16, stdev=3415.14
00:19:06.262 lat (msec): min=4180, max=12858, avg=9423.45, stdev=3330.53
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342],
00:19:06.262 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[12684],
00:19:06.262 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:19:06.262 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.262 | 99.99th=[12818]
00:19:06.262 lat (msec) : >=2000=100.00%
00:19:06.262 cpu : usr=0.01%, sys=0.45%, ctx=66, majf=0, minf=18177
00:19:06.262 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.262 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459851: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=1, BW=1579KiB/s (1617kB/s)(20.0MiB/12967msec)
00:19:06.262 slat (usec): min=1096, max=4227.2k, avg=542336.80, stdev=1156497.16
00:19:06.262 clat (msec): min=2119, max=12965, avg=10755.88, stdev=3646.70
00:19:06.262 lat (msec): min=4253, max=12966, avg=11298.22, stdev=3052.96
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 6409],
00:19:06.262 | 30.00th=[ 8557], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953],
00:19:06.262 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.262 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.262 | 99.99th=[12953]
00:19:06.262 lat (msec) : >=2000=100.00%
00:19:06.262 cpu : usr=0.00%, sys=0.14%, ctx=51, majf=0, minf=5121
00:19:06.262 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.262 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459852: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=129, BW=130MiB/s (136MB/s)(1406MiB/10827msec)
00:19:06.262 slat (usec): min=47, max=2082.3k, avg=7106.33, stdev=55924.80
00:19:06.262 clat (msec): min=476, max=4319, avg=943.03, stdev=699.83
00:19:06.262 lat (msec): min=481, max=6401, avg=950.14, stdev=709.48
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 481], 5.00th=[ 485], 10.00th=[ 493], 20.00th=[ 542],
00:19:06.262 | 30.00th=[ 617], 40.00th=[ 659], 50.00th=[ 751], 60.00th=[ 776],
00:19:06.262 | 70.00th=[ 827], 80.00th=[ 927], 90.00th=[ 1485], 95.00th=[ 3037],
00:19:06.262 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3205], 99.95th=[ 4329],
00:19:06.262 | 99.99th=[ 4329]
00:19:06.262 bw ( KiB/s): min=100352, max=266240, per=6.28%, avg=174593.13, stdev=53021.66, samples=15
00:19:06.262 iops : min= 98, max= 260, avg=170.47, stdev=51.73, samples=15
00:19:06.262 lat (msec) : 500=12.02%, 750=39.62%, 1000=30.37%, 2000=8.89%, >=2000=9.10%
00:19:06.262 cpu : usr=0.09%, sys=2.24%, ctx=1839, majf=0, minf=32769
00:19:06.262 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.262 issued rwts: total=1406,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459853: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=3, BW=3308KiB/s (3387kB/s)(42.0MiB/13002msec)
00:19:06.262 slat (usec): min=642, max=2173.5k, avg=259092.00, stdev=696848.79
00:19:06.262 clat (msec): min=2119, max=12999, avg=12070.34, stdev=2450.20
00:19:06.262 lat (msec): min=4293, max=13001, avg=12329.43, stdev=1881.72
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 2123], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[12818],
00:19:06.262 | 30.00th=[12953], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953],
00:19:06.262 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.262 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.262 | 99.99th=[12953]
00:19:06.262 lat (msec) : >=2000=100.00%
00:19:06.262 cpu : usr=0.00%, sys=0.33%, ctx=70, majf=0, minf=10753
00:19:06.262 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0%
00:19:06.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.262 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.262 job2: (groupid=0, jobs=1): err= 0: pid=459854: Wed Nov 6 08:55:27 2024
00:19:06.262 read: IOPS=3, BW=3248KiB/s (3326kB/s)(41.0MiB/12927msec)
00:19:06.262 slat (usec): min=745, max=2095.3k, avg=263318.05, stdev=686542.60
00:19:06.262 clat (msec): min=2130, max=12923, avg=9152.91, stdev=3611.01
00:19:06.262 lat (msec): min=4225, max=12926, avg=9416.23, stdev=3477.24
00:19:06.262 clat percentiles (msec):
00:19:06.262 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 4329],
00:19:06.262 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671],
00:19:06.262 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.262 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.262 | 99.99th=[12953]
00:19:06.262 lat (msec) : >=2000=100.00%
00:19:06.262 cpu : usr=0.02%, sys=0.26%, ctx=39, majf=0, minf=10497
00:19:06.262 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0%
00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.263 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.263 job2: (groupid=0, jobs=1): err= 0: pid=459855: Wed Nov 6 08:55:27 2024
00:19:06.263 read: IOPS=34, BW=34.7MiB/s (36.4MB/s)(448MiB/12905msec)
00:19:06.263 slat (usec): min=41, max=2142.5k, avg=24090.34, stdev=195969.23
00:19:06.263 clat (msec): min=523, max=11102, avg=3530.01, stdev=4303.91
00:19:06.263 lat (msec): min=527, max=11129, avg=3554.10, stdev=4315.89
00:19:06.263 clat percentiles (msec):
00:19:06.263 | 1.00th=[ 527], 5.00th=[ 535], 10.00th=[ 558], 20.00th=[ 609],
00:19:06.263 | 30.00th=[ 651], 40.00th=[ 667], 50.00th=[ 676], 60.00th=[ 718],
00:19:06.263 | 70.00th=[ 4799], 80.00th=[10805], 90.00th=[10939], 95.00th=[11073],
00:19:06.263 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073],
00:19:06.263 | 99.99th=[11073]
00:19:06.263 bw ( KiB/s): min= 2052, max=226874, per=2.63%, avg=72995.33, stdev=86467.41, samples=9
00:19:06.263 iops : min= 2, max= 221, avg=71.22, stdev=84.32, samples=9
00:19:06.263 lat (msec) : 750=60.71%, 1000=4.69%, >=2000=34.60%
00:19:06.263 cpu : usr=0.02%, sys=0.99%, ctx=400, majf=0, minf=32769
00:19:06.263 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=85.9%
00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.263 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:19:06.263 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.263 job2: (groupid=0, jobs=1): err= 0: pid=459856: Wed Nov 6 08:55:27 2024
00:19:06.263 read: IOPS=1, BW=1897KiB/s (1942kB/s)(24.0MiB/12956msec)
00:19:06.263 slat (usec): min=1094, max=2135.7k, avg=451048.98, stdev=856235.32
00:19:06.263 clat (msec): min=2130, max=12954, avg=9546.91, stdev=3631.11
00:19:06.263 lat (msec): min=4265, max=12955, avg=9997.95, stdev=3329.57
00:19:06.263 clat percentiles (msec):
00:19:06.263 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 4279], 20.00th=[ 6409],
00:19:06.263 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12818],
00:19:06.263 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.263 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.263 | 99.99th=[12953]
00:19:06.263 lat (msec) : >=2000=100.00%
00:19:06.263 cpu : usr=0.00%, sys=0.15%, ctx=64, majf=0, minf=6145
00:19:06.263 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0%
00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.263 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.263 job3: (groupid=0, jobs=1): err= 0: pid=459857: Wed Nov 6 08:55:27 2024
00:19:06.263 read: IOPS=33, BW=33.4MiB/s (35.0MB/s)(432MiB/12924msec)
00:19:06.263 slat (usec): min=40, max=2158.9k, avg=25032.25, stdev=189131.73
00:19:06.263 clat (msec): min=624, max=7212, avg=2822.25, stdev=2299.57
00:19:06.263 lat (msec): min=633, max=7215, avg=2847.28, stdev=2306.57
00:19:06.263 clat percentiles (msec):
00:19:06.263 | 1.00th=[ 634], 5.00th=[ 642], 10.00th=[ 642], 20.00th=[ 651],
00:19:06.263 | 30.00th=[ 709], 40.00th=[ 1401], 50.00th=[ 1418], 60.00th=[ 2140],
00:19:06.263 | 70.00th=[ 4933], 80.00th=[ 5940], 90.00th=[ 6141], 95.00th=[ 6275],
00:19:06.263 | 99.00th=[ 6342], 99.50th=[ 6342], 99.90th=[ 7215], 99.95th=[ 7215],
00:19:06.263 | 99.99th=[ 7215]
00:19:06.263 bw ( KiB/s): min= 2052, max=219136, per=3.21%, avg=89234.86, stdev=91709.66, samples=7
00:19:06.263 iops : min= 2, max= 214, avg=87.14, stdev=89.56, samples=7
00:19:06.263 lat (msec) : 750=30.09%, 2000=29.17%, >=2000=40.74%
00:19:06.263 cpu : usr=0.00%, sys=0.78%, ctx=411, majf=0, minf=32769
00:19:06.263 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.4%
00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.263 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:19:06.263 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.263 job3: (groupid=0, jobs=1): err= 0: pid=459858: Wed Nov 6 08:55:27 2024
00:19:06.263 read: IOPS=2, BW=2399KiB/s (2456kB/s)(30.0MiB/12807msec)
00:19:06.263 slat (usec): min=556, max=2120.6k, avg=356619.45, stdev=784026.36
00:19:06.263 clat (msec): min=2107, max=12805, avg=10782.14, stdev=3216.26
00:19:06.263 lat (msec): min=4199, max=12806, avg=11138.76, stdev=2785.56
00:19:06.263 clat percentiles (msec):
00:19:06.263 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409],
00:19:06.263 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818],
00:19:06.263 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:19:06.263 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.263 | 99.99th=[12818]
00:19:06.263 lat (msec) : >=2000=100.00%
00:19:06.263 cpu : usr=0.00%, sys=0.18%, ctx=49, majf=0, minf=7681
00:19:06.263 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0%
00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.263 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.263 job3: (groupid=0, jobs=1): err= 0: pid=459859: Wed Nov 6 08:55:27 2024
00:19:06.263 read: IOPS=5, BW=5831KiB/s (5971kB/s)(73.0MiB/12819msec)
00:19:06.263 slat (usec): min=721, max=2085.1k, avg=146598.09, stdev=517029.04
00:19:06.263 clat (msec): min=2116, max=12815, avg=9595.59, stdev=2801.36
00:19:06.263 lat (msec): min=4168, max=12818, avg=9742.19, stdev=2682.02
00:19:06.263 clat percentiles (msec):
00:19:06.263 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8423],
00:19:06.263 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671],
00:19:06.263 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:19:06.263 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.263 | 99.99th=[12818]
00:19:06.263 lat (msec) : >=2000=100.00%
00:19:06.263 cpu : usr=0.00%, sys=0.44%, ctx=71, majf=0, minf=18689
00:19:06.263 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7%
00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.263 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.263 job3: (groupid=0, jobs=1): err= 0: pid=459860: Wed Nov 6 08:55:27 2024
00:19:06.263 read: IOPS=5, BW=5706KiB/s (5843kB/s)(72.0MiB/12921msec)
00:19:06.263 slat (usec): min=574, max=2057.9k, avg=149995.63, stdev=513833.88
00:19:06.263 clat (msec): min=2119, max=12919, avg=10096.30, stdev=3011.49
00:19:06.263 lat (msec): min=4177, max=12919, avg=10246.29, stdev=2874.45
00:19:06.263 clat percentiles (msec):
00:19:06.263 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409],
00:19:06.263 | 30.00th=[ 8557], 40.00th=[10537], 50.00th=[10537], 60.00th=[12684],
00:19:06.263 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.263 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.263 | 99.99th=[12953]
00:19:06.263 lat (msec) : >=2000=100.00%
00:19:06.263 cpu : usr=0.00%, sys=0.45%, ctx=132, majf=0, minf=18433
00:19:06.263 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5%
00:19:06.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.263 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.263 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.263 job3: (groupid=0, jobs=1): err= 0: pid=459861: Wed Nov 6 08:55:27 2024
00:19:06.263 read: IOPS=5, BW=5845KiB/s (5985kB/s)(74.0MiB/12964msec)
00:19:06.263 slat (usec): min=749, max=2162.2k, avg=146692.46, stdev=526004.17
00:19:06.263 clat (msec): min=2107, max=12962, avg=11929.41, stdev=2240.04
00:19:06.263 lat (msec): min=4270, max=12963, avg=12076.10, stdev=1920.73
00:19:06.263 clat percentiles (msec):
00:19:06.263 | 1.00th=[ 2106], 5.00th=[ 6342], 10.00th=[ 8490], 20.00th=[12684],
00:19:06.263 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953],
00:19:06.264 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.264 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.264 | 99.99th=[12953]
00:19:06.264 lat (msec) : >=2000=100.00%
00:19:06.264 cpu : usr=0.00%, sys=0.47%, ctx=92, majf=0, minf=18945
00:19:06.264 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.264 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459862: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=206, BW=206MiB/s (216MB/s)(2069MiB/10032msec)
00:19:06.264 slat (usec): min=37, max=2084.2k, avg=4833.38, stdev=46468.76
00:19:06.264 clat (msec): min=21, max=6726, avg=592.32, stdev=771.97
00:19:06.264 lat (msec): min=34, max=6815, avg=597.15, stdev=780.45
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 82], 5.00th=[ 199], 10.00th=[ 199], 20.00th=[ 245],
00:19:06.264 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 363], 60.00th=[ 439],
00:19:06.264 | 70.00th=[ 456], 80.00th=[ 651], 90.00th=[ 902], 95.00th=[ 3306],
00:19:06.264 | 99.00th=[ 3473], 99.50th=[ 3473], 99.90th=[ 4665], 99.95th=[ 4665],
00:19:06.264 | 99.99th=[ 6745]
00:19:06.264 bw ( KiB/s): min=30781, max=620544, per=8.94%, avg=248650.94, stdev=179233.59, samples=16
00:19:06.264 iops : min= 30, max= 606, avg=242.63, stdev=174.93, samples=16
00:19:06.264 lat (msec) : 50=0.63%, 100=0.72%, 250=31.08%, 500=39.00%, 750=14.89%
00:19:06.264 lat (msec) : 1000=4.98%, 2000=2.42%, >=2000=6.28%
00:19:06.264 cpu : usr=0.06%, sys=2.61%, ctx=2116, majf=0, minf=32769
00:19:06.264 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.264 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459863: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=4, BW=4592KiB/s (4702kB/s)(58.0MiB/12933msec)
00:19:06.264 slat (usec): min=730, max=2103.4k, avg=186559.22, stdev=585072.31
00:19:06.264 clat (msec): min=2112, max=12930, avg=11500.47, stdev=2621.39
00:19:06.264 lat (msec): min=4215, max=12932, avg=11687.03, stdev=2307.79
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[10671],
00:19:06.264 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953],
00:19:06.264 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.264 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.264 | 99.99th=[12953]
00:19:06.264 lat (msec) : >=2000=100.00%
00:19:06.264 cpu : usr=0.00%, sys=0.39%, ctx=98, majf=0, minf=14849
00:19:06.264 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.264 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459864: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=2, BW=2074KiB/s (2123kB/s)(26.0MiB/12839msec)
00:19:06.264 slat (usec): min=732, max=2125.2k, avg=412972.67, stdev=831045.97
00:19:06.264 clat (msec): min=2101, max=12837, avg=8018.40, stdev=2590.09
00:19:06.264 lat (msec): min=4199, max=12838, avg=8431.38, stdev=2461.71
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6342],
00:19:06.264 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8490],
00:19:06.264 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12684], 95.00th=[12818],
00:19:06.264 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.264 | 99.99th=[12818]
00:19:06.264 lat (msec) : >=2000=100.00%
00:19:06.264 cpu : usr=0.00%, sys=0.17%, ctx=46, majf=0, minf=6657
00:19:06.264 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.264 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459865: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=2, BW=2546KiB/s (2607kB/s)(32.0MiB/12870msec)
00:19:06.264 slat (usec): min=508, max=2090.3k, avg=336063.10, stdev=757665.26
00:19:06.264 clat (msec): min=2114, max=12867, avg=9750.92, stdev=3453.29
00:19:06.264 lat (msec): min=4205, max=12868, avg=10086.98, stdev=3200.21
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342],
00:19:06.264 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12818],
00:19:06.264 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818],
00:19:06.264 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:19:06.264 | 99.99th=[12818]
00:19:06.264 lat (msec) : >=2000=100.00%
00:19:06.264 cpu : usr=0.00%, sys=0.19%, ctx=51, majf=0, minf=8193
00:19:06.264 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.264 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459866: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=62, BW=62.6MiB/s (65.7MB/s)(804MiB/12834msec)
00:19:06.264 slat (usec): min=41, max=2098.0k, avg=13342.61, stdev=107947.38
00:19:06.264 clat (msec): min=635, max=8522, avg=1967.44, stdev=2465.63
00:19:06.264 lat (msec): min=636, max=8526, avg=1980.78, stdev=2473.62
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 634], 5.00th=[ 651], 10.00th=[ 701], 20.00th=[ 768],
00:19:06.264 | 30.00th=[ 810], 40.00th=[ 869], 50.00th=[ 927], 60.00th=[ 978],
00:19:06.264 | 70.00th=[ 1036], 80.00th=[ 1234], 90.00th=[ 7550], 95.00th=[ 7953],
00:19:06.264 | 99.00th=[ 8221], 99.50th=[ 8288], 99.90th=[ 8490], 99.95th=[ 8490],
00:19:06.264 | 99.99th=[ 8490]
00:19:06.264 bw ( KiB/s): min= 2048, max=163840, per=3.32%, avg=92399.20, stdev=62419.35, samples=15
00:19:06.264 iops : min= 2, max= 160, avg=90.13, stdev=60.90, samples=15
00:19:06.264 lat (msec) : 750=14.68%, 1000=50.37%, 2000=18.53%, >=2000=16.42%
00:19:06.264 cpu : usr=0.02%, sys=1.31%, ctx=953, majf=0, minf=32769
00:19:06.264 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.264 issued rwts: total=804,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459867: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=4, BW=4737KiB/s (4850kB/s)(60.0MiB/12971msec)
00:19:06.264 slat (usec): min=747, max=2093.3k, avg=180983.63, stdev=578020.83
00:19:06.264 clat (msec): min=2111, max=12968, avg=11234.53, stdev=3050.86
00:19:06.264 lat (msec): min=4192, max=12970, avg=11415.51, stdev=2813.35
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 8557],
00:19:06.264 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953],
00:19:06.264 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:19:06.264 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.264 | 99.99th=[12953]
00:19:06.264 lat (msec) : >=2000=100.00%
00:19:06.264 cpu : usr=0.00%, sys=0.42%, ctx=76, majf=0, minf=15361
00:19:06.264 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.264 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459868: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=47, BW=47.6MiB/s (49.9MB/s)(519MiB/10897msec)
00:19:06.264 slat (usec): min=47, max=2071.3k, avg=19267.74, stdev=166784.63
00:19:06.264 clat (msec): min=379, max=8669, avg=2213.57, stdev=2745.95
00:19:06.264 lat (msec): min=382, max=8769, avg=2232.84, stdev=2759.83
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 418], 5.00th=[ 422], 10.00th=[ 426], 20.00th=[ 426],
00:19:06.264 | 30.00th=[ 430], 40.00th=[ 430], 50.00th=[ 451], 60.00th=[ 1083],
00:19:06.264 | 70.00th=[ 1250], 80.00th=[ 6141], 90.00th=[ 6342], 95.00th=[ 8658],
00:19:06.264 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:19:06.264 | 99.99th=[ 8658]
00:19:06.264 bw ( KiB/s): min= 2048, max=286147, per=4.81%, avg=133707.17, stdev=143452.47, samples=6
00:19:06.264 iops : min= 2, max= 279, avg=130.50, stdev=140.00, samples=6
00:19:06.264 lat (msec) : 500=50.48%, 1000=6.17%, 2000=17.53%, >=2000=25.82%
00:19:06.264 cpu : usr=0.01%, sys=1.15%, ctx=508, majf=0, minf=32769
00:19:06.264 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9%
00:19:06.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.264 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:19:06.264 issued rwts: total=519,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.264 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.264 job3: (groupid=0, jobs=1): err= 0: pid=459869: Wed Nov 6 08:55:27 2024
00:19:06.264 read: IOPS=4, BW=4604KiB/s (4715kB/s)(58.0MiB/12900msec)
00:19:06.264 slat (usec): min=485, max=2081.0k, avg=185876.00, stdev=580432.02
00:19:06.264 clat (msec): min=2118, max=12898, avg=10572.51, stdev=3035.34
00:19:06.264 lat (msec): min=4174, max=12899, avg=10758.39, stdev=2831.84
00:19:06.264 clat percentiles (msec):
00:19:06.264 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 8490],
00:19:06.264 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818],
00:19:06.264 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953],
00:19:06.265 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.265 | 99.99th=[12953]
00:19:06.265 lat (msec) : >=2000=100.00%
00:19:06.265 cpu : usr=0.00%, sys=0.33%, ctx=96, majf=0, minf=14849
00:19:06.265 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.265 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459870: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=7, BW=7905KiB/s (8095kB/s)(84.0MiB/10881msec)
00:19:06.265 slat (usec): min=512, max=2056.2k, avg=128123.40, stdev=481792.17
00:19:06.265 clat (msec): min=117, max=10879, avg=7371.86, stdev=3424.64
00:19:06.265 lat (msec): min=2128, max=10880, avg=7499.99, stdev=3350.50
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 118], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329],
00:19:06.265 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10671],
00:19:06.265 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10939],
00:19:06.265 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:19:06.265 | 99.99th=[10939]
00:19:06.265 lat (msec) : 250=1.19%, >=2000=98.81%
00:19:06.265 cpu : usr=0.01%, sys=0.60%, ctx=79, majf=0, minf=21505
00:19:06.265 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.5%, 16=19.0%, 32=38.1%, >=64=25.0%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.265 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459871: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=2, BW=2859KiB/s (2928kB/s)(36.0MiB/12894msec)
00:19:06.265 slat (usec): min=761, max=2099.8k, avg=299538.82, stdev=722887.06
00:19:06.265 clat (msec): min=2109, max=12890, avg=9494.79, stdev=3631.17
00:19:06.265 lat (msec): min=4179, max=12892, avg=9794.33, stdev=3444.51
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:19:06.265 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12818],
00:19:06.265 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12953],
00:19:06.265 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:19:06.265 | 99.99th=[12953]
00:19:06.265 lat (msec) : >=2000=100.00%
00:19:06.265 cpu : usr=0.01%, sys=0.23%, ctx=49, majf=0, minf=9217
00:19:06.265 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.265 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459872: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=2, BW=2268KiB/s (2323kB/s)(24.0MiB/10834msec)
00:19:06.265 slat (usec): min=461, max=2085.6k, avg=447513.09, stdev=851345.27
00:19:06.265 clat (msec): min=92, max=10826, avg=6842.79, stdev=3443.87
00:19:06.265 lat (msec): min=2158, max=10833, avg=7290.31, stdev=3219.10
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 93], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 2232],
00:19:06.265 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 6544],
00:19:06.265 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:19:06.265 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:19:06.265 | 99.99th=[10805]
00:19:06.265 lat (msec) : 100=4.17%, >=2000=95.83%
00:19:06.265 cpu : usr=0.01%, sys=0.15%, ctx=78, majf=0, minf=6145
00:19:06.265 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.265 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459873: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=6, BW=6385KiB/s (6539kB/s)(68.0MiB/10905msec)
00:19:06.265 slat (usec): min=760, max=2066.5k, avg=158716.66, stdev=535454.30
00:19:06.265 clat (msec): min=111, max=10902, avg=9035.18, stdev=3007.29
00:19:06.265 lat (msec): min=2158, max=10904, avg=9193.90, stdev=2807.44
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 111], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6477],
00:19:06.265 | 30.00th=[ 8658], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805],
00:19:06.265 | 70.00th=[10805], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939],
00:19:06.265 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:19:06.265 | 99.99th=[10939]
00:19:06.265 lat (msec) : 250=1.47%, >=2000=98.53%
00:19:06.265 cpu : usr=0.00%, sys=0.57%, ctx=112, majf=0, minf=17409
00:19:06.265 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:19:06.265 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459874: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=56, BW=56.7MiB/s (59.4MB/s)(611MiB/10777msec)
00:19:06.265 slat (usec): min=458, max=2088.2k, avg=17442.96, stdev=143853.60
00:19:06.265 clat (msec): min=114, max=7004, avg=2085.17, stdev=2340.86
00:19:06.265 lat (msec): min=515, max=7006, avg=2102.61, stdev=2345.06
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 514], 5.00th=[ 518], 10.00th=[ 531], 20.00th=[ 651],
00:19:06.265 | 30.00th=[ 785], 40.00th=[ 810], 50.00th=[ 953], 60.00th=[ 1150],
00:19:06.265 | 70.00th=[ 1267], 80.00th=[ 4396], 90.00th=[ 6745], 95.00th=[ 6879],
00:19:06.265 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013],
00:19:06.265 | 99.99th=[ 7013]
00:19:06.265 bw ( KiB/s): min= 2052, max=253952, per=3.56%, avg=99067.10, stdev=85690.55, samples=10
00:19:06.265 iops : min= 2, max= 248, avg=96.60, stdev=83.61, samples=10
00:19:06.265 lat (msec) : 250=0.16%, 750=26.35%, 1000=26.84%, 2000=23.90%, >=2000=22.75%
00:19:06.265 cpu : usr=0.04%, sys=1.74%, ctx=1110, majf=0, minf=32769
00:19:06.265 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:19:06.265 issued rwts: total=611,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459875: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=32, BW=32.9MiB/s (34.5MB/s)(422MiB/12818msec)
00:19:06.265 slat (usec): min=81, max=2073.3k, avg=25341.93, stdev=198822.68
00:19:06.265 clat (msec): min=663, max=11184, avg=3741.93, stdev=4321.45
00:19:06.265 lat (msec): min=668, max=11189, avg=3767.27, stdev=4333.54
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 667], 5.00th=[ 667], 10.00th=[ 676], 20.00th=[ 676],
00:19:06.265 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 718],
00:19:06.265 | 70.00th=[ 6342], 80.00th=[10671], 90.00th=[10939], 95.00th=[11073],
00:19:06.265 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208],
00:19:06.265 | 99.99th=[11208]
00:19:06.265 bw ( KiB/s): min= 2052, max=192127, per=2.71%, avg=75424.75, stdev=77794.84, samples=8
00:19:06.265 iops : min= 2, max= 187, avg=73.50, stdev=75.71, samples=8
00:19:06.265 lat (msec) : 750=62.56%, >=2000=37.44%
00:19:06.265 cpu : usr=0.04%, sys=1.05%, ctx=362, majf=0, minf=32769
00:19:06.265 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:19:06.265 issued rwts: total=422,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459876: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=28, BW=28.7MiB/s (30.0MB/s)(310MiB/10820msec)
00:19:06.265 slat (usec): min=36, max=2078.2k, avg=34540.16, stdev=236344.50
00:19:06.265 clat (msec): min=110, max=8612, avg=2904.88, stdev=2386.62
00:19:06.265 lat (msec): min=555, max=8628, avg=2939.42, stdev=2400.12
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 558], 5.00th=[ 567], 10.00th=[ 575], 20.00th=[ 625],
00:19:06.265 | 30.00th=[ 625], 40.00th=[ 634], 50.00th=[ 1318], 60.00th=[ 4866],
00:19:06.265 | 70.00th=[ 5269], 80.00th=[ 5403], 90.00th=[ 5537], 95.00th=[ 5604],
00:19:06.265 | 99.00th=[ 6544], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:19:06.265 | 99.99th=[ 8658]
00:19:06.265 bw ( KiB/s): min= 2052, max=192512, per=2.25%, avg=62464.67, stdev=74617.97, samples=6
00:19:06.265 iops : min= 2, max= 188, avg=61.00, stdev=72.87, samples=6
00:19:06.265 lat (msec) : 250=0.32%, 750=49.35%, 2000=0.97%, >=2000=49.35%
00:19:06.265 cpu : usr=0.01%, sys=0.80%, ctx=269, majf=0, minf=32769
00:19:06.265 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.3%, >=64=79.7%
00:19:06.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.265 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:19:06.265 issued rwts: total=310,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.265 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.265 job4: (groupid=0, jobs=1): err= 0: pid=459877: Wed Nov 6 08:55:27 2024
00:19:06.265 read: IOPS=1, BW=1273KiB/s (1304kB/s)(16.0MiB/12869msec)
00:19:06.265 slat (msec): min=8, max=2118, avg=672.52, stdev=981.44
00:19:06.265 clat (msec): min=2108, max=12748, avg=6771.12, stdev=2937.59
00:19:06.265 lat (msec): min=4169, max=12868, avg=7443.64, stdev=3029.26
00:19:06.265 clat percentiles (msec):
00:19:06.265 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4178], 20.00th=[ 4245],
00:19:06.265 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6342], 60.00th=[ 6409],
00:19:06.265 | 70.00th=[ 8490], 80.00th=[ 8490], 90.00th=[10671], 95.00th=[12684],
00:19:06.266 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684],
00:19:06.266 | 99.99th=[12684]
00:19:06.266 lat (msec) : >=2000=100.00%
00:19:06.266 cpu : usr=0.00%, sys=0.10%, ctx=38, majf=0, minf=4097
00:19:06.266 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job4: (groupid=0, jobs=1): err= 0: pid=459878: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=3, BW=3108KiB/s (3183kB/s)(33.0MiB/10871msec)
00:19:06.266 slat (usec): min=1088, max=2075.1k, avg=326084.31, stdev=742118.56
00:19:06.266 clat (msec): min=109, max=10867, avg=7540.92, stdev=3527.34
00:19:06.266 lat (msec): min=2157, max=10870, avg=7867.00, stdev=3309.57
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 110], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329],
00:19:06.266 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10671],
00:19:06.266 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:19:06.266 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:19:06.266 | 99.99th=[10805]
00:19:06.266 lat (msec) : 250=3.03%, >=2000=96.97%
00:19:06.266 cpu : usr=0.00%, sys=0.23%, ctx=88, majf=0, minf=8449
00:19:06.266 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.266 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job4: (groupid=0, jobs=1): err= 0: pid=459879: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=57, BW=57.2MiB/s (60.0MB/s)(624MiB/10901msec)
00:19:06.266 slat (usec): min=277, max=2088.2k, avg=17277.13, stdev=142118.80
00:19:06.266 clat (msec): min=114, max=7036, avg=2094.92, stdev=2309.39
00:19:06.266 lat (msec): min=554, max=7038, avg=2112.20, stdev=2313.57
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 550], 5.00th=[ 558], 10.00th=[ 575], 20.00th=[ 701],
00:19:06.266 | 30.00th=[ 793], 40.00th=[ 877], 50.00th=[ 1036], 60.00th=[ 1167],
00:19:06.266 | 70.00th=[ 1284], 80.00th=[ 4329], 90.00th=[ 6745], 95.00th=[ 6879],
00:19:06.266 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013],
00:19:06.266 | 99.99th=[ 7013]
00:19:06.266 bw ( KiB/s): min=12288, max=224830, per=4.06%, avg=112806.56, stdev=74244.65, samples=9
00:19:06.266 iops : min= 12, max= 219, avg=110.00, stdev=72.50, samples=9
00:19:06.266 lat (msec) : 250=0.16%, 750=24.20%, 1000=23.08%, 2000=30.29%, >=2000=22.28%
00:19:06.266 cpu : usr=0.05%, sys=1.84%, ctx=1143, majf=0, minf=32769
00:19:06.266 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:19:06.266 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job4: (groupid=0, jobs=1): err= 0: pid=459880: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=117, BW=117MiB/s (123MB/s)(1262MiB/10781msec)
00:19:06.266 slat (usec): min=33, max=2075.5k, avg=8462.36, stdev=87188.22
00:19:06.266 clat (msec): min=92, max=3986, avg=878.13, stdev=1006.79
00:19:06.266 lat (msec): min=200, max=3986, avg=886.59, stdev=1011.28
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 201], 5.00th=[ 203], 10.00th=[ 203], 20.00th=[ 213],
00:19:06.266 | 30.00th=[ 326], 40.00th=[ 550], 50.00th=[ 642], 60.00th=[ 676],
00:19:06.266 | 70.00th=[ 760], 80.00th=[ 827], 90.00th=[ 2903], 95.00th=[ 3641],
00:19:06.266 | 99.00th=[ 3943], 99.50th=[ 3977], 99.90th=[ 3977], 99.95th=[ 3977],
00:19:06.266 | 99.99th=[ 3977]
00:19:06.266 bw ( KiB/s): min= 8192, max=542720, per=7.59%, avg=211130.18, stdev=159105.89, samples=11
00:19:06.266 iops : min= 8, max= 530, avg=206.18, stdev=155.38, samples=11
00:19:06.266 lat (msec) : 100=0.08%, 250=25.75%, 500=11.89%, 750=30.67%, 1000=19.49%
00:19:06.266 lat (msec) : 2000=0.32%, >=2000=11.81%
00:19:06.266 cpu : usr=0.07%, sys=1.88%, ctx=1114, majf=0, minf=32769
00:19:06.266 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.266 issued rwts: total=1262,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job4: (groupid=0, jobs=1): err= 0: pid=459881: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=283, BW=284MiB/s (298MB/s)(3061MiB/10782msec)
00:19:06.266 slat (usec): min=32, max=2071.2k, avg=3479.62, stdev=52402.47
00:19:06.266 clat (msec): min=115, max=2486, avg=437.27, stdev=595.39
00:19:06.266 lat (msec): min=242, max=2487, avg=440.75, stdev=597.36
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 247], 20.00th=[ 249],
00:19:06.266 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 253],
00:19:06.266 | 70.00th=[ 257], 80.00th=[ 262], 90.00th=[ 351], 95.00th=[ 2433],
00:19:06.266 | 99.00th=[ 2467], 99.50th=[ 2467], 99.90th=[ 2467], 99.95th=[ 2500],
00:19:06.266 | 99.99th=[ 2500]
00:19:06.266 bw ( KiB/s): min= 2052, max=528384, per=13.51%, avg=375493.94, stdev=211761.31, samples=16
00:19:06.266 iops : min= 2, max= 516, avg=366.56, stdev=206.87, samples=16
00:19:06.266 lat (msec) : 250=41.29%, 500=50.41%, >=2000=8.30%
00:19:06.266 cpu : usr=0.07%, sys=2.88%, ctx=2760, majf=0, minf=32770
00:19:06.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.266 issued rwts: total=3061,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job4: (groupid=0, jobs=1): err= 0: pid=459882: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=2, BW=3032KiB/s (3105kB/s)(32.0MiB/10807msec)
00:19:06.266 slat (msec): min=6, max=2066, avg=334.28, stdev=747.90
00:19:06.266 clat (msec): min=109, max=10794, avg=6419.77, stdev=3237.81
00:19:06.266 lat (msec): min=2141, max=10806, avg=6754.04, stdev=3115.22
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 110], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2232],
00:19:06.266 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6544], 60.00th=[ 8658],
00:19:06.266 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10805], 95.00th=[10805],
00:19:06.266 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:19:06.266 | 99.99th=[10805]
00:19:06.266 lat (msec) : 250=3.12%, >=2000=96.88%
00:19:06.266 cpu : usr=0.00%, sys=0.19%, ctx=77, majf=0, minf=8193
00:19:06.266 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.266 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job5: (groupid=0, jobs=1): err= 0: pid=459883: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=42, BW=42.0MiB/s (44.1MB/s)(453MiB/10776msec)
00:19:06.266 slat (usec): min=41, max=2127.8k, avg=23529.29, stdev=193968.98
00:19:06.266 clat (msec): min=113, max=9068, avg=2922.91, stdev=3523.49
00:19:06.266 lat (msec): min=523, max=9107, avg=2946.44, stdev=3530.68
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 523], 5.00th=[ 527], 10.00th=[ 558], 20.00th=[ 592],
00:19:06.266 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 709],
00:19:06.266 | 70.00th=[ 2668], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060],
00:19:06.266 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:19:06.266 | 99.99th=[ 9060]
00:19:06.266 bw ( KiB/s): min= 2048, max=253445, per=3.00%, avg=83393.12, stdev=95716.59, samples=8
00:19:06.266 iops : min= 2, max= 247, avg=81.38, stdev=93.35, samples=8
00:19:06.266 lat (msec) : 250=0.22%, 750=66.89%, 1000=0.44%, >=2000=32.45%
00:19:06.266 cpu : usr=0.05%, sys=1.16%, ctx=380, majf=0, minf=32769
00:19:06.266 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:19:06.266 issued rwts: total=453,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job5: (groupid=0, jobs=1): err= 0: pid=459884: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=4, BW=4409KiB/s (4514kB/s)(47.0MiB/10917msec)
00:19:06.266 slat (usec): min=1107, max=2127.1k, avg=229852.56, stdev=649184.43
00:19:06.266 clat (msec): min=112, max=10912, avg=9655.59, stdev=2767.81
00:19:06.266 lat (msec): min=2184, max=10916, avg=9885.45, stdev=2379.43
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 113], 5.00th=[ 2265], 10.00th=[ 4396], 20.00th=[10671],
00:19:06.266 | 30.00th=[10805], 40.00th=[10805], 50.00th=[10805], 60.00th=[10939],
00:19:06.266 | 70.00th=[10939], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939],
00:19:06.266 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:19:06.266 | 99.99th=[10939]
00:19:06.266 lat (msec) : 250=2.13%, >=2000=97.87%
00:19:06.266 cpu : usr=0.00%, sys=0.41%, ctx=95, majf=0, minf=12033
00:19:06.266 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0%
00:19:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.266 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:19:06.266 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.266 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.266 job5: (groupid=0, jobs=1): err= 0: pid=459885: Wed Nov 6 08:55:27 2024
00:19:06.266 read: IOPS=1, BW=1137KiB/s (1164kB/s)(12.0MiB/10806msec)
00:19:06.266 slat (usec): min=551, max=2123.1k, avg=890399.22, stdev=1056141.52
00:19:06.266 clat (msec): min=120, max=10731, avg=5771.13, stdev=3185.66
00:19:06.266 lat (msec): min=2180, max=10805, avg=6661.53, stdev=2946.97
00:19:06.266 clat percentiles (msec):
00:19:06.266 | 1.00th=[ 121], 5.00th=[ 121], 10.00th=[ 2165], 20.00th=[ 2232],
00:19:06.266 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6477], 60.00th=[ 6544],
00:19:06.266 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[10671],
00:19:06.266 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671],
00:19:06.266 | 99.99th=[10671]
00:19:06.266 lat (msec) : 250=8.33%, >=2000=91.67%
00:19:06.266 cpu : usr=0.00%, sys=0.06%, ctx=51, majf=0, minf=3073
00:19:06.267 IO depths : 1=8.3%, 2=16.7%, 4=33.3%, 8=41.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:06.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 issued rwts: total=12,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.267 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.267 job5: (groupid=0, jobs=1): err= 0: pid=459886: Wed Nov 6 08:55:27 2024
00:19:06.267 read: IOPS=2, BW=2738KiB/s (2804kB/s)(29.0MiB/10846msec)
00:19:06.267 slat (usec): min=871, max=2094.0k, avg=369810.50, stdev=788167.60
00:19:06.267 clat (msec): min=120, max=10844, avg=6571.08, stdev=3365.80
00:19:06.267 lat (msec): min=2170, max=10845, avg=6940.89, stdev=3217.68
00:19:06.267 clat percentiles (msec):
00:19:06.267 | 1.00th=[ 122], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 2232],
00:19:06.267 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8557],
00:19:06.267 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:19:06.267 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:19:06.267 | 99.99th=[10805]
00:19:06.267 lat (msec) : 250=3.45%, >=2000=96.55%
00:19:06.267 cpu : usr=0.00%, sys=0.19%, ctx=66, majf=0, minf=7425
00:19:06.267 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0%
00:19:06.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:19:06.267 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.267 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.267 job5: (groupid=0, jobs=1): err= 0: pid=459887: Wed Nov 6 08:55:27 2024
00:19:06.267 read: IOPS=56, BW=56.3MiB/s (59.0MB/s)(609MiB/10816msec)
00:19:06.267 slat (usec): min=31, max=2136.3k, avg=17569.03, stdev=167044.34
00:19:06.267 clat (msec): min=112, max=6992, avg=1795.61, stdev=2551.26
00:19:06.267 lat (msec): min=377, max=6994, avg=1813.17, stdev=2557.35
00:19:06.267 clat percentiles (msec):
00:19:06.267 | 1.00th=[ 380], 5.00th=[ 397], 10.00th=[ 409], 20.00th=[ 414],
00:19:06.267 | 30.00th=[ 418], 40.00th=[ 422], 50.00th=[ 430], 60.00th=[ 443],
00:19:06.267 | 70.00th=[ 464], 80.00th=[ 6544], 90.00th=[ 6745], 95.00th=[ 6879],
00:19:06.267 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 7013], 99.95th=[ 7013],
00:19:06.267 | 99.99th=[ 7013]
00:19:06.267 bw ( KiB/s): min= 2048, max=317440, per=5.07%, avg=141020.00, stdev=140852.66, samples=7
00:19:06.267 iops : min= 2, max= 310, avg=137.71, stdev=137.55, samples=7
00:19:06.267 lat (msec) : 250=0.16%, 500=75.21%, 750=0.99%, >=2000=23.65%
00:19:06.267 cpu : usr=0.02%, sys=1.00%, ctx=1064, majf=0, minf=32769
00:19:06.267 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.7%
00:19:06.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:19:06.267 issued rwts: total=609,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.267 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.267 job5: (groupid=0, jobs=1): err= 0: pid=459888: Wed Nov 6 08:55:27 2024
00:19:06.267 read: IOPS=13, BW=13.8MiB/s (14.4MB/s)(149MiB/10833msec)
00:19:06.267 slat (usec): min=105, max=2073.9k, avg=71873.18, stdev=357921.83
00:19:06.267 clat (msec): min=122, max=10724, avg=8229.06, stdev=2991.76
00:19:06.267 lat (msec): min=2137, max=10727, avg=8300.93, stdev=2922.98
00:19:06.267 clat percentiles (msec):
00:19:06.267 | 1.00th=[ 2140], 5.00th=[ 2232], 10.00th=[ 4279], 20.00th=[ 4396],
00:19:06.267 | 30.00th=[ 6544], 40.00th=[ 8557], 50.00th=[10402], 60.00th=[10537],
00:19:06.267 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671],
00:19:06.267 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671],
00:19:06.267 | 99.99th=[10671]
00:19:06.267 bw ( KiB/s): min= 2052, max=26624, per=0.40%, avg=11259.75, stdev=10771.83, samples=4
00:19:06.267 iops : min= 2, max= 26, avg=10.75, stdev=10.56, samples=4
00:19:06.267 lat (msec) : 250=0.67%, >=2000=99.33%
00:19:06.267 cpu : usr=0.00%, sys=0.90%, ctx=116, majf=0, minf=32769
00:19:06.267 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.4%, 16=10.7%, 32=21.5%, >=64=57.7%
00:19:06.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 complete : 0=0.0%, 4=95.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.3%
00:19:06.267 issued rwts: total=149,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.267 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.267 job5: (groupid=0, jobs=1): err= 0: pid=459889: Wed Nov 6 08:55:27 2024
00:19:06.267 read: IOPS=151, BW=151MiB/s (158MB/s)(1631MiB/10799msec)
00:19:06.267 slat (usec): min=36, max=2063.1k, avg=6558.96, stdev=100861.95
00:19:06.267 clat (msec): min=97, max=6734, avg=440.05, stdev=981.38
00:19:06.267 lat (msec): min=122, max=6738, avg=446.61, stdev=994.65
00:19:06.267 clat percentiles (msec):
00:19:06.267 | 1.00th=[ 123], 5.00th=[ 124], 10.00th=[ 124], 20.00th=[ 124],
00:19:06.267 | 30.00th=[ 125], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 127],
00:19:06.267 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 264], 95.00th=[ 2299],
00:19:06.267 | 99.00th=[ 6611], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745],
00:19:06.267 | 99.99th=[ 6745]
00:19:06.267 bw ( KiB/s): min= 2052, max=1048576, per=18.45%, avg=513017.83, stdev=455704.75, samples=6
00:19:06.267 iops : min= 2, max= 1024, avg=500.83, stdev=444.80, samples=6
00:19:06.267 lat (msec) : 100=0.06%, 250=78.23%, 500=11.83%, >=2000=9.87%
00:19:06.267 cpu : usr=0.03%, sys=1.47%, ctx=2060, majf=0, minf=32769
00:19:06.267 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1%
00:19:06.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.267 issued rwts: total=1631,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.267 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.267 job5: (groupid=0, jobs=1): err= 0: pid=459890: Wed Nov 6 08:55:27 2024
00:19:06.267 read: IOPS=108, BW=109MiB/s (114MB/s)(1179MiB/10853msec)
00:19:06.267 slat (usec): min=49, max=2134.9k, avg=9101.43, stdev=121395.29
00:19:06.267 clat (msec): min=119, max=6742, avg=566.11, stdev=1019.86
00:19:06.267 lat (msec): min=187, max=6744, avg=575.21, stdev=1036.64
00:19:06.267 clat percentiles (msec):
00:19:06.267 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 207],
00:19:06.267 | 30.00th=[ 215], 40.00th=[ 224], 50.00th=[ 228], 60.00th=[ 236],
00:19:06.267 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 2333], 95.00th=[ 2433],
00:19:06.267 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745],
00:19:06.267 | 99.99th=[ 6745]
00:19:06.267 bw ( KiB/s): min=86188, max=584849, per=15.51%, avg=431126.20, stdev=216649.60, samples=5
00:19:06.267 iops : min= 84, max= 571, avg=420.80, stdev=211.66, samples=5
00:19:06.267 lat (msec) : 250=79.30%, 500=7.89%, >=2000=12.81%
00:19:06.267 cpu : usr=0.02%, sys=1.18%, ctx=2026, majf=0, minf=32769
00:19:06.267 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7%
00:19:06.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.267 issued rwts: total=1179,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.267 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:06.267 job5: (groupid=0, jobs=1): err= 0: pid=459891: Wed Nov 6 08:55:27 2024
00:19:06.267 read: IOPS=46, BW=46.0MiB/s (48.3MB/s)(498MiB/10819msec)
00:19:06.267 slat (usec): min=94, max=2059.7k, avg=21490.83, stdev=181658.02
00:19:06.267 clat (msec): min=113, max=6952, avg=2222.45, stdev=2490.85
00:19:06.267 lat (msec): min=530, max=6962, avg=2243.94, stdev=2495.09
00:19:06.267 clat percentiles (msec):
00:19:06.267 | 1.00th=[ 531], 5.00th=[ 550], 10.00th=[ 575], 20.00th=[ 600],
00:19:06.267 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 634], 60.00th=[ 651],
00:19:06.267 | 70.00th=[ 2635], 80.00th=[ 6544], 90.00th=[ 6745], 95.00th=[ 6812],
00:19:06.267 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946],
00:19:06.267 | 99.99th=[ 6946]
00:19:06.267 bw ( KiB/s): min= 2048, max=217088, per=3.42%, avg=94976.50, stdev=91102.95, samples=8
00:19:06.267 iops : min= 2, max= 212, avg=92.75, stdev=88.97, samples=8
00:19:06.267 lat (msec) : 250=0.20%, 750=66.06%, >=2000=33.73%
00:19:06.267 cpu : usr=0.05%, sys=0.93%, ctx=1069, majf=0, minf=32769
00:19:06.267 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.3%
00:19:06.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.267 complete : 0=0.0%,
4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:06.267 issued rwts: total=498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.267 job5: (groupid=0, jobs=1): err= 0: pid=459892: Wed Nov 6 08:55:27 2024 00:19:06.267 read: IOPS=46, BW=46.8MiB/s (49.1MB/s)(506MiB/10806msec) 00:19:06.267 slat (usec): min=369, max=2064.5k, avg=21129.02, stdev=184474.87 00:19:06.267 clat (msec): min=113, max=6611, avg=1289.39, stdev=1639.93 00:19:06.267 lat (msec): min=224, max=6714, avg=1310.52, stdev=1668.74 00:19:06.267 clat percentiles (msec): 00:19:06.267 | 1.00th=[ 226], 5.00th=[ 232], 10.00th=[ 234], 20.00th=[ 236], 00:19:06.267 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:19:06.267 | 70.00th=[ 2165], 80.00th=[ 3473], 90.00th=[ 3540], 95.00th=[ 3608], 00:19:06.268 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6611], 99.95th=[ 6611], 00:19:06.268 | 99.99th=[ 6611] 00:19:06.268 bw ( KiB/s): min= 2052, max=454656, per=6.98%, avg=194049.00, stdev=225882.37, samples=4 00:19:06.268 iops : min= 2, max= 444, avg=189.50, stdev=220.59, samples=4 00:19:06.268 lat (msec) : 250=64.43%, 500=4.74%, 2000=0.79%, >=2000=30.04% 00:19:06.268 cpu : usr=0.00%, sys=0.77%, ctx=1000, majf=0, minf=32769 00:19:06.268 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:19:06.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.268 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:06.268 issued rwts: total=506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.268 job5: (groupid=0, jobs=1): err= 0: pid=459893: Wed Nov 6 08:55:27 2024 00:19:06.268 read: IOPS=54, BW=54.1MiB/s (56.7MB/s)(583MiB/10786msec) 00:19:06.268 slat (usec): min=38, max=2047.8k, avg=18287.36, stdev=149056.02 00:19:06.268 clat (msec): min=120, max=5890, avg=1881.40, stdev=1950.77 00:19:06.268 lat (msec): min=634, max=5925, avg=1899.69, stdev=1954.23 00:19:06.268 clat percentiles (msec): 00:19:06.268 | 1.00th=[ 634], 5.00th=[ 642], 10.00th=[ 651], 20.00th=[ 676], 00:19:06.268 | 30.00th=[ 726], 40.00th=[ 768], 50.00th=[ 785], 60.00th=[ 827], 00:19:06.268 | 70.00th=[ 927], 80.00th=[ 4279], 90.00th=[ 5604], 95.00th=[ 5738], 00:19:06.268 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:19:06.268 | 99.99th=[ 5873] 00:19:06.268 bw ( KiB/s): min= 2052, max=204800, per=3.36%, avg=93388.30, stdev=89271.84, samples=10 00:19:06.268 iops : min= 2, max= 200, avg=91.10, stdev=87.29, samples=10 00:19:06.268 lat (msec) : 250=0.17%, 750=32.42%, 1000=39.11%, 2000=2.57%, >=2000=25.73% 00:19:06.268 cpu : usr=0.03%, sys=1.18%, ctx=563, majf=0, minf=32769 00:19:06.268 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:19:06.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.268 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:06.268 issued rwts: total=583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.268 job5: (groupid=0, jobs=1): err= 0: pid=459894: Wed Nov 6 08:55:27 2024 00:19:06.268 read: IOPS=82, BW=82.2MiB/s (86.2MB/s)(889MiB/10815msec) 00:19:06.268 slat (usec): min=380, max=2176.5k, avg=12027.20, stdev=140704.60 00:19:06.268 clat (msec): min=119, max=4610, avg=628.57, stdev=794.36 00:19:06.268 lat (msec): min=258, max=6786, avg=640.59, 
stdev=822.25 00:19:06.268 clat percentiles (msec): 00:19:06.268 | 1.00th=[ 257], 5.00th=[ 259], 10.00th=[ 262], 20.00th=[ 271], 00:19:06.268 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 300], 60.00th=[ 317], 00:19:06.268 | 70.00th=[ 330], 80.00th=[ 368], 90.00th=[ 2366], 95.00th=[ 2433], 00:19:06.268 | 99.00th=[ 2534], 99.50th=[ 2534], 99.90th=[ 4597], 99.95th=[ 4597], 00:19:06.268 | 99.99th=[ 4597] 00:19:06.268 bw ( KiB/s): min= 2052, max=474187, per=9.35%, avg=259938.50, stdev=218970.14, samples=6 00:19:06.268 iops : min= 2, max= 463, avg=253.83, stdev=213.82, samples=6 00:19:06.268 lat (msec) : 250=0.11%, 500=84.59%, >=2000=15.30% 00:19:06.268 cpu : usr=0.02%, sys=1.06%, ctx=2012, majf=0, minf=32769 00:19:06.268 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:19:06.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.268 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.268 issued rwts: total=889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.268 job5: (groupid=0, jobs=1): err= 0: pid=459895: Wed Nov 6 08:55:27 2024 00:19:06.268 read: IOPS=194, BW=195MiB/s (204MB/s)(1948MiB/10012msec) 00:19:06.268 slat (usec): min=46, max=2156.7k, avg=5132.53, stdev=90459.43 00:19:06.268 clat (msec): min=10, max=6551, avg=457.77, stdev=1138.49 00:19:06.268 lat (msec): min=11, max=6570, avg=462.90, stdev=1149.40 00:19:06.268 clat percentiles (msec): 00:19:06.268 | 1.00th=[ 26], 5.00th=[ 84], 10.00th=[ 94], 20.00th=[ 94], 00:19:06.268 | 30.00th=[ 94], 40.00th=[ 95], 50.00th=[ 95], 60.00th=[ 95], 00:19:06.268 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 247], 95.00th=[ 4144], 00:19:06.268 | 99.00th=[ 4463], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 6544], 00:19:06.268 | 99.99th=[ 6544] 00:19:06.268 bw ( KiB/s): min=32768, max=1376256, per=26.86%, avg=746731.80, stdev=628230.92, samples=5 00:19:06.268 iops : min= 32, max= 1344, avg=729.00, stdev=613.34, samples=5 00:19:06.268 lat (msec) : 20=0.67%, 50=2.05%, 100=79.41%, 250=8.16%, 500=0.10% 00:19:06.268 lat (msec) : 2000=0.82%, >=2000=8.78% 00:19:06.268 cpu : usr=0.01%, sys=1.52%, ctx=1865, majf=0, minf=32769 00:19:06.268 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:19:06.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.268 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.268 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.268 00:19:06.268 Run status group 0 (all jobs): 00:19:06.268 READ: bw=2715MiB/s (2847MB/s), 798KiB/s-284MiB/s (817kB/s-298MB/s), io=34.5GiB (37.0GB), run=10012-13003msec 00:19:06.268 00:19:06.268 Disk stats (read/write): 00:19:06.268 nvme0n1: ios=65466/0, merge=0/0, ticks=6175989/0, in_queue=6175989, util=98.76% 00:19:06.268 nvme1n1: ios=35182/0, merge=0/0, ticks=8608316/0, in_queue=8608316, util=98.96% 00:19:06.268 nvme2n1: ios=25200/0, merge=0/0, ticks=9803600/0, in_queue=9803600, util=99.12% 00:19:06.268 nvme3n1: ios=34270/0, merge=0/0, ticks=9225083/0, in_queue=9225083, util=99.19% 00:19:06.268 nvme4n1: ios=52621/0, merge=0/0, ticks=10281301/0, in_queue=10281301, util=99.15% 00:19:06.268 nvme5n1: ios=67934/0, merge=0/0, ticks=7903361/0, in_queue=7903361, util=98.96% 00:19:06.268 08:55:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:19:06.268 
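The teardown traced below loops over the six SPDK subsystems exercised by the fio jobs above. A minimal sketch of that loop, reconstructed from the srq_overwhelm.sh trace markers (script lines 40-43); the helper names and the RPC are taken verbatim from the trace, while the printf-based serial formatting is an assumption for illustration:

    # Hedged reconstruction of the traced teardown loop (srq_overwhelm.sh@40-43).
    for i in $(seq 0 5); do
        # Host side: drop the connection to subsystem i (srq_overwhelm.sh@41).
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # Poll until no block device with this serial is left in 'lsblk -o NAME,SERIAL'
        # (the trace shows waitforserial_disconnect doing exactly this grep).
        waitforserial_disconnect "$(printf 'SPDK%014d' "$i")"   # e.g. SPDK00000000000000
        # Target side: delete the subsystem over the RPC socket (srq_overwhelm.sh@43).
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done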
08:55:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:19:06.268 08:55:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:06.268 08:55:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:19:06.268 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:06.268 08:55:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:06.835 08:55:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:07.769 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:07.769 08:55:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:08.706 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 
-- # set +x 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:08.706 08:55:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:09.642 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.642 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:09.901 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.901 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:09.901 08:55:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:10.838 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:10.838 rmmod nvme_rdma 00:19:10.838 rmmod nvme_fabrics 00:19:10.838 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@515 -- # '[' -n 458416 ']' 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # killprocess 458416 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 458416 ']' 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 458416 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 458416 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 458416' 00:19:10.839 killing process with pid 458416 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 458416 00:19:10.839 08:55:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 458416 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:11.098 00:19:11.098 real 0m33.135s 00:19:11.098 user 1m54.443s 00:19:11.098 sys 0m13.719s 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:11.098 ************************************ 00:19:11.098 END TEST nvmf_srq_overwhelm 00:19:11.098 ************************************ 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.098 08:55:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:11.358 ************************************ 00:19:11.358 START TEST nvmf_shutdown 00:19:11.358 ************************************ 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:11.358 * Looking for test storage... 00:19:11.358 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:11.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.358 --rc genhtml_branch_coverage=1 00:19:11.358 --rc genhtml_function_coverage=1 00:19:11.358 --rc genhtml_legend=1 00:19:11.358 --rc geninfo_all_blocks=1 00:19:11.358 --rc geninfo_unexecuted_blocks=1 00:19:11.358 00:19:11.358 ' 00:19:11.358 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:19:11.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.358 --rc genhtml_branch_coverage=1 00:19:11.358 --rc genhtml_function_coverage=1 00:19:11.359 --rc genhtml_legend=1 00:19:11.359 --rc geninfo_all_blocks=1 00:19:11.359 --rc geninfo_unexecuted_blocks=1 00:19:11.359 00:19:11.359 ' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:11.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.359 --rc genhtml_branch_coverage=1 00:19:11.359 --rc genhtml_function_coverage=1 00:19:11.359 --rc genhtml_legend=1 00:19:11.359 --rc geninfo_all_blocks=1 00:19:11.359 --rc geninfo_unexecuted_blocks=1 00:19:11.359 00:19:11.359 ' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:11.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.359 --rc genhtml_branch_coverage=1 00:19:11.359 --rc genhtml_function_coverage=1 00:19:11.359 --rc genhtml_legend=1 00:19:11.359 --rc geninfo_all_blocks=1 00:19:11.359 --rc geninfo_unexecuted_blocks=1 00:19:11.359 00:19:11.359 ' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:11.359 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:11.359 08:55:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.359 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:11.619 ************************************ 00:19:11.619 START TEST nvmf_shutdown_tc1 00:19:11.619 ************************************ 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:11.619 08:55:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.190 08:55:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:18.190 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:18.190 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:da:00.0: mlx_0_0' 00:19:18.190 Found net devices under 0000:da:00.0: mlx_0_0 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:18.190 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:18.191 Found net devices under 0000:da:00.1: mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # rdma_device_init 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:18.191 08:55:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:18.191 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.191 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:18.191 altname enp218s0f0np0 00:19:18.191 altname ens818f0np0 00:19:18.191 inet 192.168.100.8/24 scope global mlx_0_0 00:19:18.191 valid_lft forever preferred_lft forever 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:18.191 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.191 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:18.191 altname enp218s0f1np1 00:19:18.191 altname ens818f1np1 00:19:18.191 inet 192.168.100.9/24 scope global mlx_0_1 00:19:18.191 valid_lft forever preferred_lft forever 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.191 
08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.191 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:18.192 192.168.100.9' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:18.192 192.168.100.9' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # head -n 1 00:19:18.192 08:55:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:18.192 192.168.100.9' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # tail -n +2 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # head -n 1 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=466199 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 466199 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 466199 ']' 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 [2024-11-06 08:55:40.320434] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
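The address-discovery steps traced above reduce to a small helper plus a head/tail split of a newline-separated IP list. A sketch assembled from the traced commands (the awk/cut pipeline and the @483/@484 steps appear verbatim in the trace):

  # First IPv4 address of an interface, as traced in nvmf/common.sh@116-117:
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # RDMA_IP_LIST holds one address per line; peel off the first two:
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)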
00:19:18.192 [2024-11-06 08:55:40.320478] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.192 [2024-11-06 08:55:40.397053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.192 [2024-11-06 08:55:40.440425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.192 [2024-11-06 08:55:40.440461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.192 [2024-11-06 08:55:40.440468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.192 [2024-11-06 08:55:40.440474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.192 [2024-11-06 08:55:40.440479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.192 [2024-11-06 08:55:40.442099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.192 [2024-11-06 08:55:40.442222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.192 [2024-11-06 08:55:40.442293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.192 [2024-11-06 08:55:40.442294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 [2024-11-06 08:55:40.604985] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e420a0/0x1e46590) succeed. 00:19:18.192 [2024-11-06 08:55:40.613967] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e43730/0x1e87c30) succeed. 
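With the target's reactors up and both mlx5 IB devices created, the RDMA transport is configured over the JSON-RPC socket. The flags below are the ones traced; rpc_cmd is the test helper that forwards to SPDK's scripts/rpc.py, where -u sets the in-capsule data size:

  # shutdown.sh@21, as traced: create the RDMA transport with 1024 shared
  # buffers and an 8 KiB in-capsule data size.
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192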
00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.192 08:55:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 Malloc1 00:19:18.192 [2024-11-06 08:55:40.849708] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:18.192 Malloc2 00:19:18.192 Malloc3 00:19:18.192 Malloc4 00:19:18.192 Malloc5 00:19:18.192 Malloc6 00:19:18.192 Malloc7 00:19:18.192 Malloc8 00:19:18.192 Malloc9 00:19:18.453 Malloc10 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=466470 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 466470 /var/tmp/bdevperf.sock 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 466470 ']' 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
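The bdev_svc launch traced above never writes its config to disk: shutdown.sh@78 passes it through bash process substitution, which is why the traced argument reads --json /dev/fd/63. The unexpanded form is visible later in the log's "Killed" message; a sketch of the pattern:

  # gen_nvmf_target_json emits one bdev_nvme attach fragment per subsystem
  # id (1..10); <(...) exposes that output as /dev/fd/NN for --json.
  "$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json "${num_subsystems[@]}")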
00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.453 { 00:19:18.453 "params": { 00:19:18.453 "name": "Nvme$subsystem", 00:19:18.453 "trtype": "$TEST_TRANSPORT", 00:19:18.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.453 "adrfam": "ipv4", 00:19:18.453 "trsvcid": "$NVMF_PORT", 00:19:18.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.453 "hdgst": ${hdgst:-false}, 00:19:18.453 "ddgst": ${ddgst:-false} 00:19:18.453 }, 00:19:18.453 "method": "bdev_nvme_attach_controller" 00:19:18.453 } 00:19:18.453 EOF 00:19:18.453 )") 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.453 { 00:19:18.453 "params": { 00:19:18.453 "name": "Nvme$subsystem", 00:19:18.453 "trtype": "$TEST_TRANSPORT", 00:19:18.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.453 "adrfam": "ipv4", 00:19:18.453 "trsvcid": "$NVMF_PORT", 00:19:18.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.453 "hdgst": ${hdgst:-false}, 00:19:18.453 "ddgst": ${ddgst:-false} 00:19:18.453 }, 00:19:18.453 "method": "bdev_nvme_attach_controller" 00:19:18.453 } 00:19:18.453 EOF 00:19:18.453 )") 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.453 { 00:19:18.453 "params": { 00:19:18.453 "name": "Nvme$subsystem", 00:19:18.453 "trtype": "$TEST_TRANSPORT", 00:19:18.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.453 "adrfam": "ipv4", 00:19:18.453 "trsvcid": "$NVMF_PORT", 00:19:18.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.453 "hdgst": ${hdgst:-false}, 00:19:18.453 "ddgst": ${ddgst:-false} 00:19:18.453 }, 00:19:18.453 "method": "bdev_nvme_attach_controller" 00:19:18.453 } 00:19:18.453 EOF 00:19:18.453 )") 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.453 { 00:19:18.453 "params": { 00:19:18.453 "name": "Nvme$subsystem", 00:19:18.453 "trtype": "$TEST_TRANSPORT", 00:19:18.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.453 "adrfam": "ipv4", 00:19:18.453 "trsvcid": "$NVMF_PORT", 00:19:18.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.453 "hdgst": ${hdgst:-false}, 00:19:18.453 "ddgst": ${ddgst:-false} 00:19:18.453 }, 00:19:18.453 "method": "bdev_nvme_attach_controller" 00:19:18.453 } 00:19:18.453 EOF 00:19:18.453 )") 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.453 { 00:19:18.453 "params": { 00:19:18.453 "name": "Nvme$subsystem", 00:19:18.453 "trtype": "$TEST_TRANSPORT", 00:19:18.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.453 "adrfam": "ipv4", 00:19:18.453 "trsvcid": "$NVMF_PORT", 00:19:18.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.453 "hdgst": ${hdgst:-false}, 00:19:18.453 "ddgst": ${ddgst:-false} 00:19:18.453 }, 00:19:18.453 "method": "bdev_nvme_attach_controller" 00:19:18.453 } 00:19:18.453 EOF 00:19:18.453 )") 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.453 { 00:19:18.453 "params": { 00:19:18.453 "name": "Nvme$subsystem", 00:19:18.453 "trtype": "$TEST_TRANSPORT", 00:19:18.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.453 "adrfam": "ipv4", 00:19:18.453 "trsvcid": "$NVMF_PORT", 00:19:18.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.453 "hdgst": ${hdgst:-false}, 00:19:18.453 "ddgst": ${ddgst:-false} 00:19:18.453 }, 00:19:18.453 "method": "bdev_nvme_attach_controller" 00:19:18.453 } 00:19:18.453 EOF 00:19:18.453 )") 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.453 [2024-11-06 08:55:41.323124] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:19:18.453 [2024-11-06 08:55:41.323174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.453 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.453 { 00:19:18.453 "params": { 00:19:18.453 "name": "Nvme$subsystem", 00:19:18.453 "trtype": "$TEST_TRANSPORT", 00:19:18.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.453 "adrfam": "ipv4", 00:19:18.453 "trsvcid": "$NVMF_PORT", 00:19:18.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.453 "hdgst": ${hdgst:-false}, 00:19:18.453 "ddgst": ${ddgst:-false} 00:19:18.453 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 } 00:19:18.454 EOF 00:19:18.454 )") 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.454 { 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme$subsystem", 00:19:18.454 "trtype": "$TEST_TRANSPORT", 00:19:18.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "$NVMF_PORT", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.454 "hdgst": ${hdgst:-false}, 00:19:18.454 "ddgst": ${ddgst:-false} 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 } 00:19:18.454 EOF 00:19:18.454 )") 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.454 { 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme$subsystem", 00:19:18.454 "trtype": "$TEST_TRANSPORT", 00:19:18.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "$NVMF_PORT", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.454 "hdgst": ${hdgst:-false}, 00:19:18.454 "ddgst": ${ddgst:-false} 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 } 00:19:18.454 EOF 00:19:18.454 )") 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:18.454 { 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme$subsystem", 00:19:18.454 "trtype": "$TEST_TRANSPORT", 00:19:18.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.454 "adrfam": 
"ipv4", 00:19:18.454 "trsvcid": "$NVMF_PORT", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.454 "hdgst": ${hdgst:-false}, 00:19:18.454 "ddgst": ${ddgst:-false} 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 } 00:19:18.454 EOF 00:19:18.454 )") 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:19:18.454 08:55:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme1", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme2", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme3", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme4", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme5", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme6", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme7", 00:19:18.454 "trtype": "rdma", 
00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme8", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme9", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 },{ 00:19:18.454 "params": { 00:19:18.454 "name": "Nvme10", 00:19:18.454 "trtype": "rdma", 00:19:18.454 "traddr": "192.168.100.8", 00:19:18.454 "adrfam": "ipv4", 00:19:18.454 "trsvcid": "4420", 00:19:18.454 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:18.454 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:18.454 "hdgst": false, 00:19:18.454 "ddgst": false 00:19:18.454 }, 00:19:18.454 "method": "bdev_nvme_attach_controller" 00:19:18.454 }' 00:19:18.454 [2024-11-06 08:55:41.399543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.454 [2024-11-06 08:55:41.440719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 466470 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:19:19.391 08:55:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:19:20.326 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 466470 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 466199 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.326 { 00:19:20.326 "params": { 00:19:20.326 "name": "Nvme$subsystem", 00:19:20.326 "trtype": "$TEST_TRANSPORT", 00:19:20.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.326 "adrfam": "ipv4", 00:19:20.326 "trsvcid": "$NVMF_PORT", 00:19:20.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.326 "hdgst": ${hdgst:-false}, 00:19:20.326 "ddgst": ${ddgst:-false} 00:19:20.326 }, 00:19:20.326 "method": "bdev_nvme_attach_controller" 00:19:20.326 } 00:19:20.326 EOF 00:19:20.326 )") 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.326 { 00:19:20.326 "params": { 00:19:20.326 "name": "Nvme$subsystem", 00:19:20.326 "trtype": "$TEST_TRANSPORT", 00:19:20.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.326 "adrfam": "ipv4", 00:19:20.326 "trsvcid": "$NVMF_PORT", 00:19:20.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.326 "hdgst": ${hdgst:-false}, 00:19:20.326 "ddgst": ${ddgst:-false} 00:19:20.326 }, 00:19:20.326 "method": "bdev_nvme_attach_controller" 00:19:20.326 } 00:19:20.326 EOF 00:19:20.326 )") 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.326 { 00:19:20.326 "params": { 00:19:20.326 "name": "Nvme$subsystem", 00:19:20.326 "trtype": "$TEST_TRANSPORT", 00:19:20.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.326 "adrfam": "ipv4", 00:19:20.326 "trsvcid": "$NVMF_PORT", 00:19:20.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.326 "hdgst": ${hdgst:-false}, 00:19:20.326 "ddgst": ${ddgst:-false} 00:19:20.326 }, 00:19:20.326 "method": "bdev_nvme_attach_controller" 00:19:20.326 } 00:19:20.326 EOF 00:19:20.326 )") 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.326 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.326 08:55:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.326 { 00:19:20.326 "params": { 00:19:20.326 "name": "Nvme$subsystem", 00:19:20.326 "trtype": "$TEST_TRANSPORT", 00:19:20.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.327 "adrfam": "ipv4", 00:19:20.327 "trsvcid": "$NVMF_PORT", 00:19:20.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.327 "hdgst": ${hdgst:-false}, 00:19:20.327 "ddgst": ${ddgst:-false} 00:19:20.327 }, 00:19:20.327 "method": "bdev_nvme_attach_controller" 00:19:20.327 } 00:19:20.327 EOF 00:19:20.327 )") 00:19:20.327 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.586 { 00:19:20.586 "params": { 00:19:20.586 "name": "Nvme$subsystem", 00:19:20.586 "trtype": "$TEST_TRANSPORT", 00:19:20.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.586 "adrfam": "ipv4", 00:19:20.586 "trsvcid": "$NVMF_PORT", 00:19:20.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.586 "hdgst": ${hdgst:-false}, 00:19:20.586 "ddgst": ${ddgst:-false} 00:19:20.586 }, 00:19:20.586 "method": "bdev_nvme_attach_controller" 00:19:20.586 } 00:19:20.586 EOF 00:19:20.586 )") 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.586 { 00:19:20.586 "params": { 00:19:20.586 "name": "Nvme$subsystem", 00:19:20.586 "trtype": "$TEST_TRANSPORT", 00:19:20.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.586 "adrfam": "ipv4", 00:19:20.586 "trsvcid": "$NVMF_PORT", 00:19:20.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.586 "hdgst": ${hdgst:-false}, 00:19:20.586 "ddgst": ${ddgst:-false} 00:19:20.586 }, 00:19:20.586 "method": "bdev_nvme_attach_controller" 00:19:20.586 } 00:19:20.586 EOF 00:19:20.586 )") 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.586 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.586 { 00:19:20.586 "params": { 00:19:20.586 "name": "Nvme$subsystem", 00:19:20.586 "trtype": "$TEST_TRANSPORT", 00:19:20.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.586 "adrfam": "ipv4", 00:19:20.586 "trsvcid": "$NVMF_PORT", 00:19:20.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.586 "hdgst": ${hdgst:-false}, 00:19:20.586 "ddgst": ${ddgst:-false} 00:19:20.586 }, 00:19:20.586 "method": "bdev_nvme_attach_controller" 00:19:20.586 } 00:19:20.586 EOF 00:19:20.586 )") 00:19:20.586 08:55:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.586 [2024-11-06 08:55:43.360232] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:20.587 [2024-11-06 08:55:43.360283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466740 ] 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.587 { 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme$subsystem", 00:19:20.587 "trtype": "$TEST_TRANSPORT", 00:19:20.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "$NVMF_PORT", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.587 "hdgst": ${hdgst:-false}, 00:19:20.587 "ddgst": ${ddgst:-false} 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 } 00:19:20.587 EOF 00:19:20.587 )") 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.587 { 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme$subsystem", 00:19:20.587 "trtype": "$TEST_TRANSPORT", 00:19:20.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "$NVMF_PORT", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.587 "hdgst": ${hdgst:-false}, 00:19:20.587 "ddgst": ${ddgst:-false} 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 } 00:19:20.587 EOF 00:19:20.587 )") 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:20.587 { 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme$subsystem", 00:19:20.587 "trtype": "$TEST_TRANSPORT", 00:19:20.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "$NVMF_PORT", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.587 "hdgst": ${hdgst:-false}, 00:19:20.587 "ddgst": ${ddgst:-false} 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 } 00:19:20.587 EOF 00:19:20.587 )") 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
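Each pass through the loop traced above appends one attach-controller fragment to config[]; the @582-584 steps then comma-join the fragments and hand the result to jq for validation. A sketch of one iteration, with the fragment body copied verbatim from the trace (the exact JSON wrapper placed around the joined fragments is not shown in the trace, so it is omitted here):

  for subsystem in "${@:-1}"; do
    config+=("$(cat <<-EOF
	{
	  "params": {
	    "name": "Nvme$subsystem",
	    "trtype": "$TEST_TRANSPORT",
	    "traddr": "$NVMF_FIRST_TARGET_IP",
	    "adrfam": "ipv4",
	    "trsvcid": "$NVMF_PORT",
	    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
	    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
	    "hdgst": ${hdgst:-false},
	    "ddgst": ${ddgst:-false}
	  },
	  "method": "bdev_nvme_attach_controller"
	}
	EOF
    )")
  done
  (IFS=','; printf '%s\n' "${config[*]}")  # fragments print comma-joined (@583-584)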
00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:19:20.587 08:55:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme1", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme2", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme3", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme4", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme5", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme6", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme7", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme8", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme9", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 },{ 00:19:20.587 "params": { 00:19:20.587 "name": "Nvme10", 00:19:20.587 "trtype": "rdma", 00:19:20.587 "traddr": "192.168.100.8", 00:19:20.587 "adrfam": "ipv4", 00:19:20.587 "trsvcid": "4420", 00:19:20.587 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:20.587 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:20.587 "hdgst": false, 00:19:20.587 "ddgst": false 00:19:20.587 }, 00:19:20.587 "method": "bdev_nvme_attach_controller" 00:19:20.587 }' 00:19:20.587 [2024-11-06 08:55:43.439323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.587 [2024-11-06 08:55:43.480403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.524 Running I/O for 1 seconds... 00:19:22.901 3348.00 IOPS, 209.25 MiB/s 00:19:22.901 Latency(us) 00:19:22.901 [2024-11-06T07:55:45.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.901 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.901 Verification LBA range: start 0x0 length 0x400 00:19:22.901 Nvme1n1 : 1.17 341.79 21.36 0.00 0.00 184020.28 9175.04 217704.35 00:19:22.901 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme2n1 : 1.17 353.35 22.08 0.00 0.00 175716.42 9299.87 207717.91 00:19:22.902 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme3n1 : 1.18 381.02 23.81 0.00 0.00 160815.79 6054.28 145802.00 00:19:22.902 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme4n1 : 1.18 380.62 23.79 0.00 0.00 158728.92 9924.02 138811.49 00:19:22.902 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme5n1 : 1.18 380.12 23.76 0.00 0.00 157124.86 10548.18 127826.41 00:19:22.902 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme6n1 : 1.18 379.71 23.73 0.00 0.00 154691.92 10922.67 120336.58 00:19:22.902 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme7n1 : 1.18 379.18 23.70 0.00 0.00 153234.63 11671.65 107853.53 00:19:22.902 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme8n1 : 1.18 378.79 23.67 0.00 0.00 150558.51 12108.56 109351.50 00:19:22.902 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme9n1 : 1.18 378.22 23.64 0.00 0.00 149468.33 10298.51 117839.97 00:19:22.902 
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.902 Verification LBA range: start 0x0 length 0x400 00:19:22.902 Nvme10n1 : 1.17 327.58 20.47 0.00 0.00 169601.87 8987.79 160781.65 00:19:22.902 [2024-11-06T07:55:45.916Z] =================================================================================================================== 00:19:22.902 [2024-11-06T07:55:45.916Z] Total : 3680.38 230.02 0.00 0.00 160920.81 6054.28 217704.35 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:22.902 rmmod nvme_rdma 00:19:22.902 rmmod nvme_fabrics 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 466199 ']' 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 466199 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 466199 ']' 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 466199 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.902 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466199 00:19:23.161 08:55:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:23.161 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:23.161 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466199' 00:19:23.161 killing process with pid 466199 00:19:23.161 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 466199 00:19:23.161 08:55:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 466199 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:23.420 00:19:23.420 real 0m12.010s 00:19:23.420 user 0m28.178s 00:19:23.420 sys 0m5.287s 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:23.420 ************************************ 00:19:23.420 END TEST nvmf_shutdown_tc1 00:19:23.420 ************************************ 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:23.420 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:23.681 ************************************ 00:19:23.681 START TEST nvmf_shutdown_tc2 00:19:23.681 ************************************ 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.681 08:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:23.681 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:23.681 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:23.681 Found net devices under 0000:da:00.0: mlx_0_0 00:19:23.681 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:23.682 Found net devices under 0000:da:00.1: mlx_0_1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # rdma_device_init 00:19:23.682 
08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:23.682 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.682 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:23.682 altname enp218s0f0np0 00:19:23.682 altname ens818f0np0 00:19:23.682 inet 192.168.100.8/24 scope global mlx_0_0 00:19:23.682 valid_lft forever preferred_lft forever 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:23.682 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.682 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:23.682 altname enp218s0f1np1 00:19:23.682 altname ens818f1np1 00:19:23.682 inet 192.168.100.9/24 scope global mlx_0_1 00:19:23.682 valid_lft forever preferred_lft forever 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:19:23.682 08:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:23.682 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:23.683 08:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:23.683 192.168.100.9' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:23.683 192.168.100.9' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # head -n 1 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:23.683 192.168.100.9' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # tail -n +2 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # head -n 1 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.683 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=467520 00:19:23.943 08:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 467520 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 467520 ']' 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:23.943 [2024-11-06 08:55:46.740162] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:23.943 [2024-11-06 08:55:46.740209] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.943 [2024-11-06 08:55:46.815535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.943 [2024-11-06 08:55:46.857477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.943 [2024-11-06 08:55:46.857511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.943 [2024-11-06 08:55:46.857518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.943 [2024-11-06 08:55:46.857524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.943 [2024-11-06 08:55:46.857529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
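The app_setup_trace notices above spell out how to inspect the 0xFFFF tracepoint mask this target was started with; a minimal usage sketch taken directly from those notices (the copy destination is arbitrary):

    # Sample live tracepoints from the running nvmf_tgt (shm id 0, per the notice)
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis, as suggested above
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0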
00:19:23.943 [2024-11-06 08:55:46.859156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.943 [2024-11-06 08:55:46.859264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.943 [2024-11-06 08:55:46.859359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.943 [2024-11-06 08:55:46.859359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.943 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.202 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.202 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:24.202 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.202 08:55:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.203 [2024-11-06 08:55:47.012186] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16a10a0/0x16a5590) succeed. 00:19:24.203 [2024-11-06 08:55:47.021218] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16a2730/0x16e6c30) succeed. 
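With both IB devices created, the create_subsystems phase that follows batches one set of RPCs per subsystem into rpcs.txt and replays the file in one shot; a hedged sketch of the equivalent direct rpc.py calls (the malloc size and block size are assumptions; only the transport options traced above and the 192.168.100.8:4420 RDMA listener are visible in this log):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Transport options match the rpc_cmd nvmf_create_transport traced above
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    for i in {1..10}; do
        $rpc_py bdev_malloc_create 128 512 -b Malloc$i              # size/block size assumed
        $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done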
00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.203 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.461 Malloc1 00:19:24.461 [2024-11-06 08:55:47.244564] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:24.461 Malloc2 00:19:24.461 Malloc3 00:19:24.461 Malloc4 00:19:24.461 Malloc5 00:19:24.461 Malloc6 00:19:24.721 Malloc7 00:19:24.721 Malloc8 00:19:24.721 Malloc9 00:19:24.721 Malloc10 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=467619 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 467619 /var/tmp/bdevperf.sock 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 467619 ']' 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
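The bdevperf flags in that launch line map one-to-one onto the per-job header later printed in the results (queue depth 64, 64 KiB verify I/O, 10 s runtime); a standalone sketch of the same run, writing the generated config to a file rather than feeding it through /dev/fd/63 (the output path is an assumption):

    # gen_nvmf_target_json comes from nvmf/common.sh; arguments are the subsystem indices
    gen_nvmf_target_json {1..10} > /tmp/bdevperf_nvmf.json
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvmf.json \
        -q 64 -o 65536 -w verify -t 10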
00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.721 { 00:19:24.721 "params": { 00:19:24.721 "name": "Nvme$subsystem", 00:19:24.721 "trtype": "$TEST_TRANSPORT", 00:19:24.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.721 "adrfam": "ipv4", 00:19:24.721 "trsvcid": "$NVMF_PORT", 00:19:24.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.721 "hdgst": ${hdgst:-false}, 00:19:24.721 "ddgst": ${ddgst:-false} 00:19:24.721 }, 00:19:24.721 "method": "bdev_nvme_attach_controller" 00:19:24.721 } 00:19:24.721 EOF 00:19:24.721 )") 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.721 { 00:19:24.721 "params": { 00:19:24.721 "name": "Nvme$subsystem", 00:19:24.721 "trtype": "$TEST_TRANSPORT", 00:19:24.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.721 "adrfam": "ipv4", 00:19:24.721 "trsvcid": "$NVMF_PORT", 00:19:24.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.721 "hdgst": ${hdgst:-false}, 00:19:24.721 "ddgst": ${ddgst:-false} 00:19:24.721 }, 00:19:24.721 "method": "bdev_nvme_attach_controller" 00:19:24.721 } 00:19:24.721 EOF 00:19:24.721 )") 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.721 { 00:19:24.721 "params": { 00:19:24.721 "name": "Nvme$subsystem", 00:19:24.721 "trtype": "$TEST_TRANSPORT", 00:19:24.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.721 "adrfam": "ipv4", 00:19:24.721 "trsvcid": "$NVMF_PORT", 00:19:24.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.721 "hdgst": ${hdgst:-false}, 00:19:24.721 "ddgst": ${ddgst:-false} 00:19:24.721 }, 00:19:24.721 "method": "bdev_nvme_attach_controller" 00:19:24.721 } 00:19:24.721 EOF 00:19:24.721 )") 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.721 { 00:19:24.721 "params": { 00:19:24.721 "name": "Nvme$subsystem", 00:19:24.721 "trtype": "$TEST_TRANSPORT", 00:19:24.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.721 "adrfam": "ipv4", 00:19:24.721 "trsvcid": "$NVMF_PORT", 00:19:24.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.721 "hdgst": ${hdgst:-false}, 00:19:24.721 "ddgst": ${ddgst:-false} 00:19:24.721 }, 00:19:24.721 "method": "bdev_nvme_attach_controller" 00:19:24.721 } 00:19:24.721 EOF 00:19:24.721 )") 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.721 { 00:19:24.721 "params": { 00:19:24.721 "name": "Nvme$subsystem", 00:19:24.721 "trtype": "$TEST_TRANSPORT", 00:19:24.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.721 "adrfam": "ipv4", 00:19:24.721 "trsvcid": "$NVMF_PORT", 00:19:24.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.721 "hdgst": ${hdgst:-false}, 00:19:24.721 "ddgst": ${ddgst:-false} 00:19:24.721 }, 00:19:24.721 "method": "bdev_nvme_attach_controller" 00:19:24.721 } 00:19:24.721 EOF 00:19:24.721 )") 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.721 { 00:19:24.721 "params": { 00:19:24.721 "name": "Nvme$subsystem", 00:19:24.721 "trtype": "$TEST_TRANSPORT", 00:19:24.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.721 "adrfam": "ipv4", 00:19:24.721 "trsvcid": "$NVMF_PORT", 00:19:24.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.721 "hdgst": ${hdgst:-false}, 00:19:24.721 "ddgst": ${ddgst:-false} 00:19:24.721 }, 00:19:24.721 "method": "bdev_nvme_attach_controller" 00:19:24.721 } 00:19:24.721 EOF 00:19:24.721 )") 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.721 { 00:19:24.721 "params": { 00:19:24.721 "name": "Nvme$subsystem", 00:19:24.721 "trtype": "$TEST_TRANSPORT", 00:19:24.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.721 "adrfam": "ipv4", 00:19:24.721 "trsvcid": "$NVMF_PORT", 00:19:24.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.721 "hdgst": ${hdgst:-false}, 00:19:24.721 "ddgst": ${ddgst:-false} 00:19:24.721 }, 00:19:24.721 "method": "bdev_nvme_attach_controller" 00:19:24.721 } 00:19:24.721 EOF 00:19:24.721 )") 00:19:24.721 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.721 
[2024-11-06 08:55:47.722153] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:24.722 [2024-11-06 08:55:47.722210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467619 ] 00:19:24.722 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.722 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.722 { 00:19:24.722 "params": { 00:19:24.722 "name": "Nvme$subsystem", 00:19:24.722 "trtype": "$TEST_TRANSPORT", 00:19:24.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.722 "adrfam": "ipv4", 00:19:24.722 "trsvcid": "$NVMF_PORT", 00:19:24.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.722 "hdgst": ${hdgst:-false}, 00:19:24.722 "ddgst": ${ddgst:-false} 00:19:24.722 }, 00:19:24.722 "method": "bdev_nvme_attach_controller" 00:19:24.722 } 00:19:24.722 EOF 00:19:24.722 )") 00:19:24.722 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.722 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.722 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.722 { 00:19:24.722 "params": { 00:19:24.722 "name": "Nvme$subsystem", 00:19:24.722 "trtype": "$TEST_TRANSPORT", 00:19:24.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.722 "adrfam": "ipv4", 00:19:24.722 "trsvcid": "$NVMF_PORT", 00:19:24.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.722 "hdgst": ${hdgst:-false}, 00:19:24.722 "ddgst": ${ddgst:-false} 00:19:24.722 }, 00:19:24.722 "method": "bdev_nvme_attach_controller" 00:19:24.722 } 00:19:24.722 EOF 00:19:24.722 )") 00:19:24.722 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.981 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:24.981 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:24.981 { 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme$subsystem", 00:19:24.981 "trtype": "$TEST_TRANSPORT", 00:19:24.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "$NVMF_PORT", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.981 "hdgst": ${hdgst:-false}, 00:19:24.981 "ddgst": ${ddgst:-false} 00:19:24.981 }, 00:19:24.981 "method": "bdev_nvme_attach_controller" 00:19:24.981 } 00:19:24.981 EOF 00:19:24.981 )") 00:19:24.981 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:19:24.981 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
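The jq validation just traced, together with the IFS=, join and printf that follow, stitches those ten heredoc fragments into the final bdevperf config; a sketch of the assembly, where the outer "subsystems"/"bdev" wrapper is an assumption (only the comma-joined attach-controller entries are visible in the printf output below):

    # config[@] holds one attach-controller JSON fragment per subsystem;
    # with IFS="," the ${config[*]} expansion joins them comma-separated
    IFS=","
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' \
        "${config[*]}" | jq .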
00:19:24.981 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:19:24.981 08:55:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme1", 00:19:24.981 "trtype": "rdma", 00:19:24.981 "traddr": "192.168.100.8", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "4420", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.981 "hdgst": false, 00:19:24.981 "ddgst": false 00:19:24.981 }, 00:19:24.981 "method": "bdev_nvme_attach_controller" 00:19:24.981 },{ 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme2", 00:19:24.981 "trtype": "rdma", 00:19:24.981 "traddr": "192.168.100.8", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "4420", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:24.981 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:24.981 "hdgst": false, 00:19:24.981 "ddgst": false 00:19:24.981 }, 00:19:24.981 "method": "bdev_nvme_attach_controller" 00:19:24.981 },{ 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme3", 00:19:24.981 "trtype": "rdma", 00:19:24.981 "traddr": "192.168.100.8", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "4420", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:24.981 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:24.981 "hdgst": false, 00:19:24.981 "ddgst": false 00:19:24.981 }, 00:19:24.981 "method": "bdev_nvme_attach_controller" 00:19:24.981 },{ 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme4", 00:19:24.981 "trtype": "rdma", 00:19:24.981 "traddr": "192.168.100.8", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "4420", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:24.981 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:24.981 "hdgst": false, 00:19:24.981 "ddgst": false 00:19:24.981 }, 00:19:24.981 "method": "bdev_nvme_attach_controller" 00:19:24.981 },{ 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme5", 00:19:24.981 "trtype": "rdma", 00:19:24.981 "traddr": "192.168.100.8", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "4420", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:24.981 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:24.981 "hdgst": false, 00:19:24.981 "ddgst": false 00:19:24.981 }, 00:19:24.981 "method": "bdev_nvme_attach_controller" 00:19:24.981 },{ 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme6", 00:19:24.981 "trtype": "rdma", 00:19:24.981 "traddr": "192.168.100.8", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "4420", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:24.981 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:24.981 "hdgst": false, 00:19:24.981 "ddgst": false 00:19:24.981 }, 00:19:24.981 "method": "bdev_nvme_attach_controller" 00:19:24.981 },{ 00:19:24.981 "params": { 00:19:24.981 "name": "Nvme7", 00:19:24.981 "trtype": "rdma", 00:19:24.981 "traddr": "192.168.100.8", 00:19:24.981 "adrfam": "ipv4", 00:19:24.981 "trsvcid": "4420", 00:19:24.981 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:24.982 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:24.982 "hdgst": false, 00:19:24.982 "ddgst": false 00:19:24.982 }, 00:19:24.982 "method": "bdev_nvme_attach_controller" 00:19:24.982 },{ 00:19:24.982 "params": { 00:19:24.982 "name": "Nvme8", 00:19:24.982 "trtype": "rdma", 00:19:24.982 "traddr": "192.168.100.8", 00:19:24.982 "adrfam": "ipv4", 00:19:24.982 "trsvcid": "4420", 00:19:24.982 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:24.982 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:24.982 "hdgst": false, 00:19:24.982 "ddgst": false 00:19:24.982 }, 00:19:24.982 "method": "bdev_nvme_attach_controller" 00:19:24.982 },{ 00:19:24.982 "params": { 00:19:24.982 "name": "Nvme9", 00:19:24.982 "trtype": "rdma", 00:19:24.982 "traddr": "192.168.100.8", 00:19:24.982 "adrfam": "ipv4", 00:19:24.982 "trsvcid": "4420", 00:19:24.982 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:24.982 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:24.982 "hdgst": false, 00:19:24.982 "ddgst": false 00:19:24.982 }, 00:19:24.982 "method": "bdev_nvme_attach_controller" 00:19:24.982 },{ 00:19:24.982 "params": { 00:19:24.982 "name": "Nvme10", 00:19:24.982 "trtype": "rdma", 00:19:24.982 "traddr": "192.168.100.8", 00:19:24.982 "adrfam": "ipv4", 00:19:24.982 "trsvcid": "4420", 00:19:24.982 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:24.982 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:24.982 "hdgst": false, 00:19:24.982 "ddgst": false 00:19:24.982 }, 00:19:24.982 "method": "bdev_nvme_attach_controller" 00:19:24.982 }' 00:19:24.982 [2024-11-06 08:55:47.800640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.982 [2024-11-06 08:55:47.841605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.918 Running I/O for 10 seconds... 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:25.918 08:55:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.918 08:55:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.178 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.178 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:19:26.178 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:19:26.178 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:26.437 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:26.437 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:26.437 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:26.437 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=149 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 149 -ge 100 ']' 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 467619 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 467619 ']' 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 467619 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.438 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467619 00:19:26.697 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:26.697 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:26.697 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467619' 00:19:26.697 killing process with pid 467619 00:19:26.697 08:55:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 467619
08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 467619
00:19:26.697 Received shutdown signal, test time was about 0.831546 seconds
00:19:26.697
00:19:26.697 Latency(us)
00:19:26.697 [2024-11-06T07:55:49.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:26.697 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme1n1 : 0.82 336.12 21.01 0.00 0.00 186890.70 7864.32 207717.91
00:19:26.697 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme2n1 : 0.82 363.78 22.74 0.00 0.00 169673.30 6179.11 200727.41
00:19:26.697 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme3n1 : 0.82 352.21 22.01 0.00 0.00 171712.12 8301.23 193736.90
00:19:26.697 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme4n1 : 0.82 390.72 24.42 0.00 0.00 151609.10 7552.24 134816.91
00:19:26.697 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme5n1 : 0.82 389.95 24.37 0.00 0.00 149218.06 9299.87 123332.51
00:19:26.697 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme6n1 : 0.82 389.27 24.33 0.00 0.00 146108.81 9924.02 113845.39
00:19:26.697 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme7n1 : 0.82 388.68 24.29 0.00 0.00 142903.78 10236.10 111848.11
00:19:26.697 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme8n1 : 0.83 387.84 24.24 0.00 0.00 141107.59 11109.91 103858.96
00:19:26.697 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme9n1 : 0.83 386.98 24.19 0.00 0.00 138415.93 12233.39 95869.81
00:19:26.697 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:26.697 Verification LBA range: start 0x0 length 0x400
00:19:26.697 Nvme10n1 : 0.83 308.10 19.26 0.00 0.00 169413.94 3198.78 213709.78
00:19:26.697 [2024-11-06T07:55:49.711Z] ===================================================================================================================
00:19:26.697 [2024-11-06T07:55:49.711Z] Total : 3693.65 230.85 0.00 0.00 155723.34 3198.78 213709.78
00:19:26.956 08:55:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 467520
00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:27.889 08:55:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:19:27.889 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:27.890 rmmod nvme_rdma 00:19:27.890 rmmod nvme_fabrics 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 467520 ']' 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 467520 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 467520 ']' 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 467520 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.890 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467520 00:19:28.149 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:28.149 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:28.149 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467520' 00:19:28.149 killing process with pid 467520 00:19:28.149 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 467520 00:19:28.149 08:55:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 467520 00:19:28.408 08:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:28.408 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:28.408 00:19:28.408 real 0m4.908s 00:19:28.408 user 0m19.755s 00:19:28.408 sys 0m0.997s 00:19:28.408 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.408 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:28.408 ************************************ 00:19:28.408 END TEST nvmf_shutdown_tc2 00:19:28.408 ************************************ 00:19:28.408 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:28.408 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:28.408 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.408 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:28.668 ************************************ 00:19:28.668 START TEST nvmf_shutdown_tc3 00:19:28.668 ************************************ 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.668 08:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.668 
08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:28.668 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.668 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:28.668 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:28.669 Found net devices under 0000:da:00.0: mlx_0_0 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:28.669 Found net devices under 0000:da:00.1: mlx_0_1 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # rdma_device_init 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:28.669 08:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:28.669 08:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:28.669 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.669 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:28.669 altname enp218s0f0np0 00:19:28.669 altname ens818f0np0 00:19:28.669 inet 192.168.100.8/24 scope global mlx_0_0 00:19:28.669 valid_lft forever preferred_lft forever 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:28.669 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.669 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:28.669 altname enp218s0f1np1 00:19:28.669 altname ens818f1np1 00:19:28.669 inet 192.168.100.9/24 scope global mlx_0_1 00:19:28.669 valid_lft forever preferred_lft forever 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:28.669 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 
-- # ip -o -4 addr show mlx_0_1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.670 192.168.100.9' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:28.670 192.168.100.9' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # head -n 1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:28.670 192.168.100.9' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # tail -n +2 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # head -n 1 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=468390 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 468390 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 468390 ']' 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.670 08:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.670 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:28.930 [2024-11-06 08:55:51.719316] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:28.930 [2024-11-06 08:55:51.719358] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.930 [2024-11-06 08:55:51.793766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.930 [2024-11-06 08:55:51.835621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.930 [2024-11-06 08:55:51.835655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.930 [2024-11-06 08:55:51.835663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.930 [2024-11-06 08:55:51.835668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.930 [2024-11-06 08:55:51.835673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.930 [2024-11-06 08:55:51.837289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.930 [2024-11-06 08:55:51.837398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.930 [2024-11-06 08:55:51.837503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.930 [2024-11-06 08:55:51.837504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:28.930 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.930 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:28.930 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:28.930 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.930 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.189 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.189 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:29.189 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.189 08:55:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.189 [2024-11-06 08:55:51.994195] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x1d420a0/0x1d46590) succeed. 00:19:29.189 [2024-11-06 08:55:52.003120] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d43730/0x1d87c30) succeed. 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:29.190 08:55:52 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.190 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.449 Malloc1 00:19:29.449 [2024-11-06 08:55:52.229873] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.449 Malloc2 00:19:29.449 Malloc3 00:19:29.449 Malloc4 00:19:29.449 Malloc5 00:19:29.449 Malloc6 00:19:29.708 Malloc7 00:19:29.708 Malloc8 00:19:29.708 Malloc9 00:19:29.708 Malloc10 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=468661 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 468661 /var/tmp/bdevperf.sock 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 468661 ']' 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
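(Editor's sketch, not captured log output.) The pass/fail signal for these shutdown tests is the waitforio helper, whose xtrace appears in the tc2 run above (read_io_count=3, then 149, against a threshold of 100) and again for tc3 at the end of this capture: it polls bdevperf over its RPC socket until Nvme1n1 has completed at least 100 reads. A minimal reconstruction from that trace, assuming rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py:

waitforio() {
    # Args: RPC socket path and bdev name, e.g.
    #   waitforio /var/tmp/bdevperf.sock Nvme1n1
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    # Up to 10 polls, 0.25 s apart, mirroring the (( i = 10 )) / sleep 0.25 trace.
    for ((i = 10; i != 0; i--)); do
        # Pull per-bdev I/O counters from bdevperf and extract the read count.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        # Success once the bdev has completed at least 100 reads.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}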
00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.708 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.708 { 00:19:29.708 "params": { 00:19:29.708 "name": "Nvme$subsystem", 00:19:29.708 "trtype": "$TEST_TRANSPORT", 00:19:29.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.708 "adrfam": "ipv4", 00:19:29.708 "trsvcid": "$NVMF_PORT", 00:19:29.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.708 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 [2024-11-06 08:55:52.706921] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:19:29.709 [2024-11-06 08:55:52.706969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468661 ] 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.709 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.709 { 00:19:29.709 "params": { 00:19:29.709 "name": "Nvme$subsystem", 00:19:29.709 "trtype": "$TEST_TRANSPORT", 00:19:29.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.709 "adrfam": "ipv4", 00:19:29.709 "trsvcid": "$NVMF_PORT", 00:19:29.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.709 "hdgst": ${hdgst:-false}, 00:19:29.709 "ddgst": ${ddgst:-false} 00:19:29.709 }, 00:19:29.709 "method": "bdev_nvme_attach_controller" 00:19:29.709 } 00:19:29.709 EOF 00:19:29.709 )") 00:19:29.969 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.969 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:29.969 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:29.969 { 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme$subsystem", 00:19:29.969 "trtype": "$TEST_TRANSPORT", 00:19:29.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "$NVMF_PORT", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.969 "hdgst": ${hdgst:-false}, 00:19:29.969 "ddgst": ${ddgst:-false} 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 } 00:19:29.969 EOF 00:19:29.969 )") 00:19:29.969 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:19:29.969 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
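(Editor's sketch; not part of the captured output.) The config=() / heredoc loop traced just above is gen_nvmf_target_json assembling the --json payload that bdevperf reads from /dev/fd/63: one bdev_nvme_attach_controller stanza per subsystem, comma-joined and printed, which is the resolved JSON that follows below. A condensed sketch of that mechanism under the name gen_attach_stanzas (the enclosing JSON document that the traced `jq .` validates is not visible in this capture and is omitted; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst and ddgst are assumed to come from the test environment):

gen_attach_stanzas() {
    local subsystem config=()
    # One stanza per subsystem index, exactly as the config+=("$(cat <<-EOF ...)") trace shows.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas, mirroring the IFS=, / printf '%s\n' "${config[*]}" trace.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

Invoked as gen_attach_stanzas 1 2 3 4 5 6 7 8 9 10, this would emit the ten Nvme1..Nvme10 stanzas with the variables substituted, matching the printf output below.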
00:19:29.969 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:19:29.969 08:55:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme1", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme2", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme3", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme4", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme5", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme6", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme7", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme8", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme9", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:29.969 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:29.969 "hdgst": false, 00:19:29.969 "ddgst": false 00:19:29.969 }, 00:19:29.969 "method": "bdev_nvme_attach_controller" 00:19:29.969 },{ 00:19:29.969 "params": { 00:19:29.969 "name": "Nvme10", 00:19:29.969 "trtype": "rdma", 00:19:29.969 "traddr": "192.168.100.8", 00:19:29.969 "adrfam": "ipv4", 00:19:29.969 "trsvcid": "4420", 00:19:29.969 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:29.970 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:29.970 "hdgst": false, 00:19:29.970 "ddgst": false 00:19:29.970 }, 00:19:29.970 "method": "bdev_nvme_attach_controller" 00:19:29.970 }' 00:19:29.970 [2024-11-06 08:55:52.785507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.970 [2024-11-06 08:55:52.826570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.907 Running I/O for 10 seconds... 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.907 08:55:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.165 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=19 00:19:31.165 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 19 -ge 100 ']' 00:19:31.166 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:31.424 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:31.424 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:31.424 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:31.424 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:31.424 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.424 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:31.424 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=178 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 178 -ge 100 ']' 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 468390 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 468390 ']' 00:19:31.425 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 468390 00:19:31.683 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:19:31.683 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.683 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 468390 00:19:31.683 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:31.683 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:31.683 08:55:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 468390' 00:19:31.683 killing process with pid 468390 00:19:31.683 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 468390 00:19:31.683 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 468390 00:19:32.200 2750.00 IOPS, 171.88 MiB/s [2024-11-06T07:55:55.214Z] 08:55:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:19:32.772 [2024-11-06 08:55:55.556023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bdff80 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bcff00 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bbfe80 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bafe00 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b9fd80 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b8fd00 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b7fc80 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b6fc00 
len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b5fb80 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.772 [2024-11-06 08:55:55.556620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b4fb00 len:0x10000 key:0x184d00 00:19:32.772 [2024-11-06 08:55:55.556640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.556675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b3fa80 len:0x10000 key:0x184d00 00:19:32.773 [2024-11-06 08:55:55.556695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.556730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b2fa00 len:0x10000 key:0x184d00 00:19:32.773 [2024-11-06 08:55:55.556751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.556785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b1f980 len:0x10000 key:0x184d00 00:19:32.773 [2024-11-06 08:55:55.556806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.556847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b0f900 len:0x10000 key:0x184d00 00:19:32.773 [2024-11-06 08:55:55.556861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.556884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aff880 len:0x10000 key:0x184d00 00:19:32.773 [2024-11-06 08:55:55.556901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.556924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100270f900 len:0x10000 key:0x184c00 00:19:32.773 [2024-11-06 08:55:55.556938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.559813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcff00 len:0x10000 key:0x183f00 
00:19:32.773 [2024-11-06 08:55:55.559854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.559887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfe80 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.559908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.559935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafe00 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.559957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.559984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fd80 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fd00 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fc80 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6fc00 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5fb80 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4fb00 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3fa80 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 
08:55:55.560307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2fa00 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f980 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f900 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff880 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef800 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf780 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf700 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf680 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf600 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f580 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8f500 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7f480 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6f400 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5f380 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4f300 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.560975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3f280 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.560991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.561009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2f200 len:0x10000 key:0x183f00 00:19:32.773 [2024-11-06 08:55:55.561023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.773 [2024-11-06 08:55:55.561041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1f180 len:0x10000 key:0x183f00 00:19:32.774 [2024-11-06 08:55:55.561054] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0f100 len:0x10000 key:0x183f00 00:19:32.774 [2024-11-06 08:55:55.561086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ff0000 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff80 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcff00 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfe80 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafe00 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fd80 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fd00 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fc80 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6fc00 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5fb80 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f4fb00 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3fa80 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2fa00 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f980 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f900 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff880 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef800 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 
sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf780 len:0x10000 key:0x184200 00:19:32.774 [2024-11-06 08:55:55.561676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef800 len:0x10000 key:0x184d00 00:19:32.774 [2024-11-06 08:55:55.561708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f370000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f391000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3b2000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3d3000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3f4000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f415000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f436000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 
[2024-11-06 08:55:55.561962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f457000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.561976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.561994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f478000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.562008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.562025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f499000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.562040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.562058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4ba000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.562072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.562090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4db000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.562104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.562122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4fc000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.562136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.562154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f51d000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.562168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.562186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f53e000 len:0x10000 key:0x184700 00:19:32.774 [2024-11-06 08:55:55.562200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.774 [2024-11-06 08:55:55.562224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f55f000 len:0x10000 key:0x184700 00:19:32.775 [2024-11-06 08:55:55.562238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afe00 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fd80 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fd00 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fc80 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316fc00 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315fb80 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314fb00 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313fa80 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312fa00 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564825] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f980 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f900 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff880 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef800 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df780 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.564971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.564990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf700 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf680 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af600 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f580 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308f500 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307f480 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306f400 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305f380 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304f300 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303f280 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302f200 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301f180 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300f100 len:0x10000 key:0x184800 00:19:32.775 [2024-11-06 08:55:55.565407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x2010033f0000 len:0x10000 key:0x184b00 00:19:32.775 [2024-11-06 08:55:55.565438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff80 len:0x10000 key:0x184b00 00:19:32.775 [2024-11-06 08:55:55.565471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cff00 len:0x10000 key:0x184b00 00:19:32.775 [2024-11-06 08:55:55.565503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfe80 len:0x10000 key:0x184b00 00:19:32.775 [2024-11-06 08:55:55.565534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afe00 len:0x10000 key:0x184b00 00:19:32.775 [2024-11-06 08:55:55.565566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.775 [2024-11-06 08:55:55.565584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fd80 len:0x10000 key:0x184b00 00:19:32.775 [2024-11-06 08:55:55.565598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fd00 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fc80 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336fc00 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335fb80 
len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334fb00 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333fa80 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332fa00 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f980 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f900 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff880 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef800 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.565972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df780 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.565985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf700 len:0x10000 key:0x184b00 
00:19:32.776 [2024-11-06 08:55:55.566018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf680 len:0x10000 key:0x184b00 00:19:32.776 [2024-11-06 08:55:55.566052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf700 len:0x10000 key:0x184200 00:19:32.776 [2024-11-06 08:55:55.566085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f790000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f7b1000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f7d2000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f7f3000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f814000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f835000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f856000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566317] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f877000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f898000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8b9000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8da000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8fb000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f91c000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f93d000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f95e000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.566597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f97f000 len:0x10000 key:0x184700 00:19:32.776 [2024-11-06 08:55:55.566611] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:85534000 sqhd:7210 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.569349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.776 [2024-11-06 08:55:55.569382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:2ad9ac0 sqhd:8f50 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.569400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.776 [2024-11-06 08:55:55.569415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:2ad9ac0 sqhd:8f50 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.569430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.776 [2024-11-06 08:55:55.569444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:2ad9ac0 sqhd:8f50 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.569459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.776 [2024-11-06 08:55:55.569474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:2ad9ac0 sqhd:8f50 p:0 m:0 dnr:0 00:19:32.776 [2024-11-06 08:55:55.571383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.776 [2024-11-06 08:55:55.571408] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:19:32.776 [2024-11-06 08:55:55.571423] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:19:32.777 [2024-11-06 08:55:55.571454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.571470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.571485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.571499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.571514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.571529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.571544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.571559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.573798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.777 [2024-11-06 08:55:55.573820] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:19:32.777 [2024-11-06 08:55:55.573833] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:19:32.777 [2024-11-06 08:55:55.573856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.573871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.573887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.573902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.573917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.573931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.573946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.573960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.576152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.777 [2024-11-06 08:55:55.576183] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:19:32.777 [2024-11-06 08:55:55.576211] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:19:32.777 [2024-11-06 08:55:55.576250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.576273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.576296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.576324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.576347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.576368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.576391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.576412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.578673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.777 [2024-11-06 08:55:55.578713] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:19:32.777 [2024-11-06 08:55:55.578728] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:19:32.777 [2024-11-06 08:55:55.578750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.578764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.578780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.578793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.578809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.578823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.578838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.578852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.581014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.777 [2024-11-06 08:55:55.581044] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:19:32.777 [2024-11-06 08:55:55.581065] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:19:32.777 [2024-11-06 08:55:55.581101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.581123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.581148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.581168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.581191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.581248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.581272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.581306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.583309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.777 [2024-11-06 08:55:55.583340] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:19:32.777 [2024-11-06 08:55:55.583361] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:19:32.777 [2024-11-06 08:55:55.583398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.583420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.583445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.583465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.583487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.583509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.777 [2024-11-06 08:55:55.583531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.777 [2024-11-06 08:55:55.583552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.585982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.778 [2024-11-06 08:55:55.586014] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:32.778 [2024-11-06 08:55:55.586035] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:19:32.778 [2024-11-06 08:55:55.586068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.586090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.586112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.586133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.586156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.586176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.586198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.586233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.588301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.778 [2024-11-06 08:55:55.588333] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:19:32.778 [2024-11-06 08:55:55.588360] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:19:32.778 [2024-11-06 08:55:55.588397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.588420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.588443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.588464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.588486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.588507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.588530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.588551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.590544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.778 [2024-11-06 08:55:55.590577] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:19:32.778 [2024-11-06 08:55:55.590598] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:19:32.778 [2024-11-06 08:55:55.590636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.590659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.590687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.590701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.590716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.590730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.590745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.778 [2024-11-06 08:55:55.590759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32732 cdw0:1 sqhd:2990 p:0 m:0 dnr:0 00:19:32.778 [2024-11-06 08:55:55.615400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.778 [2024-11-06 08:55:55.615447] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:19:32.778 [2024-11-06 08:55:55.615470] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:19:32.778 [2024-11-06 08:55:55.624224] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:32.778 [2024-11-06 08:55:55.624259] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:19:32.778 [2024-11-06 08:55:55.624271] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:19:32.778 [2024-11-06 08:55:55.624317] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:19:32.778 [2024-11-06 08:55:55.624329] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:19:32.778 [2024-11-06 08:55:55.624340] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:19:32.778 [2024-11-06 08:55:55.624349] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:19:32.778 [2024-11-06 08:55:55.624360] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:19:32.778 [2024-11-06 08:55:55.624370] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:19:32.778 [2024-11-06 08:55:55.624379] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:19:32.778 [2024-11-06 08:55:55.624462] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:19:32.778 [2024-11-06 08:55:55.624484] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:19:32.778 [2024-11-06 08:55:55.624494] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:19:32.778 [2024-11-06 08:55:55.624506] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:19:32.778 [2024-11-06 08:55:55.626747] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:19:32.778 task offset: 40960 on job bdev=Nvme7n1 fails
00:19:32.778
00:19:32.778 Latency(us)
00:19:32.778 [2024-11-06T07:55:55.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:32.778 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme1n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme1n1 : 1.93 141.15 8.82 33.21 0.00 365201.77 6116.69 1070546.16
00:19:32.778 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme2n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme2n1 : 1.93 141.09 8.82 33.20 0.00 362326.03 9861.61 1070546.16
00:19:32.778 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme3n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme3n1 : 1.93 149.32 9.33 33.18 0.00 343059.50 13044.78 1070546.16
00:19:32.778 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme4n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme4n1 : 1.93 152.88 9.56 33.17 0.00 333650.35 4337.86 1070546.16
00:19:32.778 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme5n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme5n1 : 1.93 140.90 8.81 33.15 0.00 353744.86 28711.01 1070546.16
00:19:32.778 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme6n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme6n1 : 1.93 140.84 8.80 33.14 0.00 351028.87 31082.79 1070546.16
00:19:32.778 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme7n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme7n1 : 1.93 148.03 9.25 33.13 0.00 333896.56 38947.11 1070546.16
00:19:32.778 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme8n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme8n1 : 1.93 145.38 9.09 33.11 0.00 332820.84 43191.34 1062557.01
00:19:32.778 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme9n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme9n1 : 1.93 132.39 8.27 33.10 0.00 359575.75 43690.67 1118481.07
00:19:32.778 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:32.778 Job: Nvme10n1 ended in about 1.93 seconds with error
00:19:32.778 Verification LBA range: start 0x0 length 0x400
00:19:32.778 Nvme10n1 : 1.93 132.33 8.27 33.08 0.00 356464.74 27213.04 1102502.77
00:19:32.778 [2024-11-06T07:55:55.792Z] ===================================================================================================================
00:19:32.778 [2024-11-06T07:55:55.792Z] Total : 1424.31 89.02 331.47 0.00 348852.71 4337.86 1118481.07
00:19:32.778 [2024-11-06 08:55:55.654172] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:32.778 [2024-11-06 08:55:55.654193] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:19:32.778 [2024-11-06 08:55:55.654211] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:19:32.779 [2024-11-06 08:55:55.665774] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:32.779 [2024-11-06 08:55:55.665824] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:32.779 [2024-11-06 08:55:55.665844] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed000
00:19:32.779 [2024-11-06 08:55:55.665944] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:32.779 [2024-11-06 08:55:55.665962] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:32.779 [2024-11-06 08:55:55.665973] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e5280
00:19:32.779 [2024-11-06 08:55:55.666071] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:32.779 [2024-11-06 08:55:55.666087] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:32.779 [2024-11-06 08:55:55.666098] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ba2c0
00:19:32.779 [2024-11-06 08:55:55.670572] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:32.779 [2024-11-06 08:55:55.670616] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:32.779 [2024-11-06 08:55:55.670636] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf4c0
00:19:32.779 [2024-11-06 08:55:55.670807] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:32.779 [2024-11-06 08:55:55.670833] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:32.779 [2024-11-06 08:55:55.670850] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf180
00:19:32.779
[2024-11-06 08:55:55.670956] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:32.779 [2024-11-06 08:55:55.670983] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:32.779 [2024-11-06 08:55:55.670999] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d20c0 00:19:32.779 [2024-11-06 08:55:55.671128] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:32.779 [2024-11-06 08:55:55.671145] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:32.779 [2024-11-06 08:55:55.671157] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170b9ac0 00:19:32.779 [2024-11-06 08:55:55.671931] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:32.779 [2024-11-06 08:55:55.671954] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:32.779 [2024-11-06 08:55:55.671967] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001709a640 00:19:32.779 [2024-11-06 08:55:55.672052] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:32.779 [2024-11-06 08:55:55.672069] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:32.779 [2024-11-06 08:55:55.672079] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001709a040 00:19:32.779 [2024-11-06 08:55:55.672167] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:32.779 [2024-11-06 08:55:55.672183] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:32.779 [2024-11-06 08:55:55.672194] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017089000 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 468661 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 468661 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.038 08:55:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 468661 00:19:33.976 [2024-11-06 08:55:56.670093] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.670144] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.671831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.671864] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.673174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.673219] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.674591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.674602] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.676045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.676076] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.677312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.677344] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.678749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.678780] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.680181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.680221] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.681530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.681561] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:19:33.976 [2024-11-06 08:55:56.683238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:33.976 [2024-11-06 08:55:56.683270] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:19:33.976 [2024-11-06 08:55:56.683289] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:19:33.976 [2024-11-06 08:55:56.683309] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:19:33.976 [2024-11-06 08:55:56.683329] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:19:33.976 [2024-11-06 08:55:56.683367] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:19:33.976 [2024-11-06 08:55:56.683374] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:19:33.976 [2024-11-06 08:55:56.683380] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:19:33.976 [2024-11-06 08:55:56.683389] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:19:33.976 [2024-11-06 08:55:56.683396] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:19:33.976 [2024-11-06 08:55:56.683402] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:19:33.976 [2024-11-06 08:55:56.683412] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:19:33.976 [2024-11-06 08:55:56.683418] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:19:33.976 [2024-11-06 08:55:56.683424] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:19:33.976 [2024-11-06 08:55:56.683434] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:33.976 [2024-11-06 08:55:56.683440] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:33.976 [2024-11-06 08:55:56.683447] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:33.976 [2024-11-06 08:55:56.683457] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:19:33.976 [2024-11-06 08:55:56.683467] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:19:33.976 [2024-11-06 08:55:56.683474] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:19:33.976 [2024-11-06 08:55:56.683483] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:19:33.976 [2024-11-06 08:55:56.683490] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:19:33.977 [2024-11-06 08:55:56.683496] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:19:33.977 [2024-11-06 08:55:56.683559] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:19:33.977 [2024-11-06 08:55:56.683572] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683581] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683590] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683599] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683608] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683617] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683625] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:19:33.977 [2024-11-06 08:55:56.683632] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:19:33.977 [2024-11-06 08:55:56.683638] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:19:33.977 [2024-11-06 08:55:56.683648] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:19:33.977 [2024-11-06 08:55:56.683654] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:19:33.977 [2024-11-06 08:55:56.683661] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:19:33.977 [2024-11-06 08:55:56.683670] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:19:33.977 [2024-11-06 08:55:56.683677] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:19:33.977 [2024-11-06 08:55:56.683684] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:19:33.977 [2024-11-06 08:55:56.683754] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683776] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:19:33.977 [2024-11-06 08:55:56.683785] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:33.977 rmmod nvme_rdma 00:19:33.977 rmmod nvme_fabrics 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 468390 ']' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 468390 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 468390 ']' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 468390 00:19:33.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (468390) - No such process 00:19:33.977 08:55:56 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 468390 is not found' 00:19:33.977 Process with pid 468390 is not found 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:33.977 00:19:33.977 real 0m5.460s 00:19:33.977 user 0m16.258s 00:19:33.977 sys 0m1.157s 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:33.977 ************************************ 00:19:33.977 END TEST nvmf_shutdown_tc3 00:19:33.977 ************************************ 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:33.977 ************************************ 00:19:33.977 START TEST nvmf_shutdown_tc4 00:19:33.977 ************************************ 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:33.977 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:34.238 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:34.238 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:34.238 08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:34.238 
08:55:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:34.238 Found net devices under 0000:da:00.0: mlx_0_0 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:34.238 Found net devices under 0000:da:00.1: mlx_0_1 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # rdma_device_init 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.238 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:19:34.239 08:55:57 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:34.239 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.239 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:34.239 altname enp218s0f0np0 00:19:34.239 altname ens818f0np0 00:19:34.239 inet 192.168.100.8/24 scope global mlx_0_0 00:19:34.239 valid_lft forever preferred_lft forever 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:34.239 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.239 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:34.239 altname enp218s0f1np1 00:19:34.239 altname ens818f1np1 00:19:34.239 inet 192.168.100.9/24 scope global mlx_0_1 00:19:34.239 valid_lft forever preferred_lft forever 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 
-- # get_ip_address mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:34.239 192.168.100.9' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:34.239 192.168.100.9' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # head -n 1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:34.239 192.168.100.9' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # tail -n +2 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # head -n 1 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=469470 00:19:34.239 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 469470 00:19:34.240 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:34.240 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 469470 ']' 00:19:34.240 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.240 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.240 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.240 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.240 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:34.499 [2024-11-06 08:55:57.252394] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:34.499 [2024-11-06 08:55:57.252436] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.499 [2024-11-06 08:55:57.326525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.499 [2024-11-06 08:55:57.368512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.499 [2024-11-06 08:55:57.368545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.499 [2024-11-06 08:55:57.368552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.499 [2024-11-06 08:55:57.368558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.499 [2024-11-06 08:55:57.368563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
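Looping back to the address harvesting traced before the target started (nvmf/common.sh@116-117): each interface's IPv4 address falls out of a three-stage pipeline. A self-contained sketch using exactly the stages shown in the trace:

  # get_ip_address as traced above: `ip -o -4` prints one record per line,
  # field 4 is "ADDR/PREFIX", and cut strips the prefix length.
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
  get_ip_address mlx_0_1   # -> 192.168.100.9
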
00:19:34.499 [2024-11-06 08:55:57.370200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.499 [2024-11-06 08:55:57.370342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.499 [2024-11-06 08:55:57.370449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.499 [2024-11-06 08:55:57.370450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.499 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:34.758 [2024-11-06 08:55:57.528300] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbe50a0/0xbe9590) succeed. 00:19:34.758 [2024-11-06 08:55:57.537266] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbe6730/0xc2ac30) succeed. 
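In the trace above, rpc_cmd is autotest_common.sh's wrapper around the repo's scripts/rpc.py, talking to the /var/tmp/spdk.sock socket the freshly started target listens on; the traced transport creation is equivalent to running:

  # Same call as the traced rpc_cmd, issued directly; -t selects the RDMA
  # transport and -u 8192 sets the I/O unit size.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
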
00:19:34.758 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.758 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:34.758 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:34.758 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.758 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:34.758 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.759 08:55:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:34.759 Malloc1 00:19:34.759 [2024-11-06 08:55:57.765489] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:35.055 Malloc2 00:19:35.055 Malloc3 00:19:35.055 Malloc4 00:19:35.055 Malloc5 00:19:35.055 Malloc6 00:19:35.055 Malloc7 00:19:35.055 Malloc8 00:19:35.314 Malloc9 00:19:35.314 Malloc10 00:19:35.314 08:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.314 08:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:35.314 08:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.314 08:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 08:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=469741 00:19:35.314 08:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:19:35.314 08:55:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:19:35.314 [2024-11-06 08:55:58.298906] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
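Each "-- # cat" in the loop above appends one subsystem's worth of RPC commands to rpcs.txt; given the Malloc1..Malloc10 bdevs and cnode-style NQNs visible in the output above, each stanza is roughly the following (block counts, serial numbers, and exact flags are illustrative assumptions, not the script's literal text):

  # Illustrative per-subsystem stanza; i runs 1..10 in the loop above,
  # and the RPC names match the objects the log shows being created.
  i=1
  rpc.py bdev_malloc_create -b Malloc$i 128 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t rdma -a 192.168.100.8 -s 4420
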
00:19:40.594 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.594 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 469470 00:19:40.594 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 469470 ']' 00:19:40.594 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 469470 00:19:40.594 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:19:40.594 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.594 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469470 00:19:40.595 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:40.595 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:40.595 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469470' 00:19:40.595 killing process with pid 469470 00:19:40.595 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 469470 00:19:40.595 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 469470 00:19:40.595 NVMe io qpair process completion error 00:19:40.595 NVMe io qpair process completion error 00:19:40.595 NVMe io qpair process completion error 00:19:40.595 NVMe io qpair process completion error 00:19:40.595 starting I/O failed: -6 00:19:40.595 NVMe io qpair process completion error 00:19:40.595 NVMe io qpair process completion error 00:19:40.854 08:56:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 00:19:41.425 Write completed with error (sct=0, sc=8) 00:19:41.425 starting I/O failed: -6 
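The storm of write failures beginning above is the test working as intended: shutdown_tc4 kills the target while spdk_nvme_perf still has 128-deep queues in flight, so every outstanding write completes aborted. A quick summary pass over a captured console log (perf.log here is a hypothetical capture, not a file this job writes):

  # sct=0/sc=8 decodes as NVMe generic status "Command Aborted due to SQ
  # Deletion", and -6 is -ENXIO from the disconnected qpair -- both are
  # the expected outcome once the target dies mid-run.
  grep -c 'Write completed with error (sct=0, sc=8)' perf.log
  grep -c 'starting I/O failed: -6' perf.log
  grep 'Submitting Keep Alive failed' perf.log
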
00:19:41.425 Write completed with error (sct=0, sc=8)
00:19:41.425 starting I/O failed: -6
[... the 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pair repeats for each remaining queued write; identical lines omitted ...]
00:19:41.426 [2024-11-06 08:56:04.372154] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
[... identical write-error / failed-I/O lines omitted ...]
00:19:41.426 [2024-11-06 08:56:04.384913] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
[... identical write-error / failed-I/O lines omitted ...]
00:19:41.427 [2024-11-06 08:56:04.398837] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
[... identical write-error / failed-I/O lines omitted ...]
00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O
failed: -6 00:19:41.428 [2024-11-06 08:56:04.411754] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 starting I/O failed: -6 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed 
with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write 
completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.428 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 [2024-11-06 08:56:04.424931] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O 
failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 starting I/O failed: -6 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write completed with error (sct=0, sc=8) 00:19:41.429 Write 
00:19:41.688 [2024-11-06 08:56:04.437905] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:19:41.688 NVMe io qpair process completion error
00:19:41.688 NVMe io qpair process completion error
00:19:41.688 NVMe io qpair process completion error
00:19:41.688 NVMe io qpair process completion error
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 469741
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 469741
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:41.947 08:56:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 469741
00:19:42.517 [2024-11-06 08:56:05.442680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.517 [2024-11-06 08:56:05.442735] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:19:42.517 [... identical 'Write completed with error (sct=0, sc=8)' completions elided ...]
00:19:42.517 [2024-11-06 08:56:05.444885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.517 [2024-11-06 08:56:05.444921] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
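The NOT wait 469741 traced above is the assertion at the heart of this test case: shutdown.sh expects the perf job to die once the target is gone, so a non-zero exit from wait is the passing outcome. A minimal sketch of the helper's shape, assuming only what the trace shows (the real autotest_common.sh version also validates its argument via valid_exec_arg and tracks the status in es):

    # Sketch of the NOT helper pattern from the trace above; simplified.
    NOT() {
        if "$@"; then
            return 1    # wrapped command succeeded -> the assertion fails
        fi
        return 0        # wrapped command failed -> the assertion passes
    }

    # As used above: pid 469741 is the spdk_nvme_perf job that must not
    # survive the target shutdown.
    NOT wait 469741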
00:19:42.517 [... identical 'Write completed with error (sct=0, sc=8)' completions elided ...]
00:19:42.517 [2024-11-06 08:56:05.446591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.517 [2024-11-06 08:56:05.446625] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:19:42.517 [... identical write-error completions elided ...]
00:19:42.517 [2024-11-06 08:56:05.449177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.517 [2024-11-06 08:56:05.449225] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:19:42.517 [... identical write-error completions elided ...]
00:19:42.517 [2024-11-06 08:56:05.451631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.517 [2024-11-06 08:56:05.451664] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:19:42.517 [... identical write-error completions elided ...]
00:19:42.517 [2024-11-06 08:56:05.453768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.517 [2024-11-06 08:56:05.453801] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:19:42.517 [... identical write-error completions elided ...]
00:19:42.517 [2024-11-06 08:56:05.456187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.517 [2024-11-06 08:56:05.456229] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:19:42.517 [... identical write-error completions elided ...]
00:19:42.518 [2024-11-06 08:56:05.458824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.518 [2024-11-06 08:56:05.458856] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:19:42.518 [... identical write-error completions elided ...]
00:19:42.518 [2024-11-06 08:56:05.461314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.518 [2024-11-06 08:56:05.461345] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:19:42.518 [... identical write-error completions elided ...]
00:19:42.518 [2024-11-06 08:56:05.463424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:42.518 [2024-11-06 08:56:05.463463] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
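Every controller above fails the same way: once the target side is torn down mid-I/O, CQ polling returns -6 (ENXIO, "No such device or address") and nvme_ctrlr_fail marks the controller failed. A hedged sketch of the overall scenario this log comes from — the binary path comes from the log itself, but the option values and the nvmf_tgt_pid variable are illustrative, not the exact ones shutdown.sh uses:

    # Illustrative reproduction of the tc4 shape: flood the RDMA target with
    # writes, kill the target mid-I/O, and require the initiator-side perf
    # process to fail rather than hang.
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    "$rootdir/build/bin/spdk_nvme_perf" -q 128 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    perf_pid=$!

    sleep 5                   # let writes get in flight on every controller
    kill -9 "$nvmf_tgt_pid"   # assumed variable holding the target's pid

    # Keep Alive submissions and CQ polling now fail with -6, as logged
    # above; wait must therefore return non-zero.
    NOT wait "$perf_pid"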
00:19:42.518 [... a long run of identical 'Write completed with error (sct=0, sc=8)' completions elided ...]
00:19:42.519 [... final identical 'Write completed with error (sct=0, sc=8)' completions elided ...]
00:19:42.519 Initializing NVMe Controllers
00:19:42.519 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:19:42.519 Controller IO queue size 128, less than required.
00:19:42.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:42.519 [... the same 'Attached to NVMe over Fabrics controller' / 'Controller IO queue size 128' message pair repeats for cnode1, cnode2, cnode3, cnode6, cnode4, cnode5, cnode10, cnode8 and cnode9 ...]
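The repeated "Controller IO queue size 128, less than required" warning means the perf tool requested a deeper queue than the 128 entries each fabrics controller advertises, so the surplus requests sit in the NVMe driver until slots free up. It is only a throughput caveat, not an error; capping the queue depth at the controller limit (or shrinking the I/O size) avoids it. An illustrative invocation, with option values assumed rather than taken from the test:

    # Keep -q at or below the controller's IO queue size (128 here) so no
    # request has to queue at the driver.
    spdk_nvme_perf -q 128 -o 4096 -w randwrite -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'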
00:19:42.520 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:19:42.520 [... matching 'Associating RDMA ... NSID 1 with lcore 0' lines for cnode1, cnode2, cnode3, cnode6, cnode4, cnode5, cnode10, cnode8 and cnode9 elided ...]
00:19:42.520 Initialization complete. Launching workers.
00:19:42.520 ========================================================
00:19:42.520 Latency(us)
00:19:42.520 Device Information : IOPS MiB/s Average min max
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1466.31 63.01 101973.00 115.03 2217574.23
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1431.14 61.49 88249.26 120.83 1205699.89
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1442.36 61.98 87669.99 122.30 1206395.44
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1444.87 62.08 87650.60 120.25 1219735.82
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1458.94 62.69 102433.57 117.72 2268663.39
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1438.84 61.83 88095.99 106.80 1232929.96
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1443.20 62.01 87838.79 121.21 1231921.26
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1421.76 61.09 89429.65 121.93 1265194.44
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1457.27 62.62 102497.09 114.17 2218588.04
00:19:42.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1479.55 63.57 101058.17 113.35 2089751.35
00:19:42.520 ========================================================
00:19:42.520 Total : 14484.25 622.37 93751.55 106.80 2268663.39
00:19:42.520
00:19:42.520 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
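As a sanity check on the table, the per-controller IOPS figures sum to the Total row (14484.24 vs the reported 14484.25, i.e. equal up to rounding). With the rows saved one per line in a file — perf.log is a hypothetical name — the check is one awk pass:

    # Sum the IOPS column (5th field from the end) of the per-controller rows.
    awk '/NSID 1 from core 0:/ { iops += $(NF-4) }
         END { printf "sum IOPS: %.2f\n", iops }' perf.log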
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:42.780 rmmod nvme_rdma 00:19:42.780 rmmod nvme_fabrics 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 469470 ']' 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 469470 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 469470 ']' 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 469470 00:19:42.780 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (469470) - No such process 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 469470 is not found' 00:19:42.780 Process with pid 469470 is not found 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:42.780 00:19:42.780 real 0m8.611s 00:19:42.780 user 0m32.139s 00:19:42.780 sys 0m1.082s 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:42.780 ************************************ 00:19:42.780 END TEST nvmf_shutdown_tc4 00:19:42.780 
************************************ 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:19:42.780 00:19:42.780 real 0m31.494s 00:19:42.780 user 1m36.538s 00:19:42.780 sys 0m8.856s 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:42.780 ************************************ 00:19:42.780 END TEST nvmf_shutdown 00:19:42.780 ************************************ 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:42.780 00:19:42.780 real 7m9.525s 00:19:42.780 user 17m35.294s 00:19:42.780 sys 1m50.182s 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.780 08:56:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:42.780 ************************************ 00:19:42.780 END TEST nvmf_target_extra 00:19:42.780 ************************************ 00:19:42.780 08:56:05 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:19:42.780 08:56:05 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:42.780 08:56:05 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.780 08:56:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:42.780 ************************************ 00:19:42.780 START TEST nvmf_host 00:19:42.780 ************************************ 00:19:42.780 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:19:43.040 * Looking for test storage... 
00:19:43.040 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1689 -- # lcov --version 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.040 --rc genhtml_branch_coverage=1 00:19:43.040 --rc genhtml_function_coverage=1 00:19:43.040 --rc genhtml_legend=1 00:19:43.040 --rc geninfo_all_blocks=1 00:19:43.040 --rc geninfo_unexecuted_blocks=1 00:19:43.040 00:19:43.040 ' 00:19:43.040 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 
00:19:43.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.040 --rc genhtml_branch_coverage=1 00:19:43.040 --rc genhtml_function_coverage=1 00:19:43.040 --rc genhtml_legend=1 00:19:43.041 --rc geninfo_all_blocks=1 00:19:43.041 --rc geninfo_unexecuted_blocks=1 00:19:43.041 00:19:43.041 ' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:43.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.041 --rc genhtml_branch_coverage=1 00:19:43.041 --rc genhtml_function_coverage=1 00:19:43.041 --rc genhtml_legend=1 00:19:43.041 --rc geninfo_all_blocks=1 00:19:43.041 --rc geninfo_unexecuted_blocks=1 00:19:43.041 00:19:43.041 ' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:43.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.041 --rc genhtml_branch_coverage=1 00:19:43.041 --rc genhtml_function_coverage=1 00:19:43.041 --rc genhtml_legend=1 00:19:43.041 --rc geninfo_all_blocks=1 00:19:43.041 --rc geninfo_unexecuted_blocks=1 00:19:43.041 00:19:43.041 ' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.041 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.041 ************************************ 00:19:43.041 START TEST nvmf_multicontroller 00:19:43.041 ************************************ 00:19:43.041 08:56:05 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:43.301 * Looking for test storage... 00:19:43.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lcov --version 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.301 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:43.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.302 --rc genhtml_branch_coverage=1 00:19:43.302 --rc genhtml_function_coverage=1 00:19:43.302 --rc genhtml_legend=1 00:19:43.302 --rc geninfo_all_blocks=1 00:19:43.302 --rc geninfo_unexecuted_blocks=1 00:19:43.302 00:19:43.302 ' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:19:43.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.302 --rc genhtml_branch_coverage=1 00:19:43.302 --rc genhtml_function_coverage=1 00:19:43.302 --rc genhtml_legend=1 00:19:43.302 --rc geninfo_all_blocks=1 00:19:43.302 --rc geninfo_unexecuted_blocks=1 00:19:43.302 00:19:43.302 ' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:43.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.302 --rc genhtml_branch_coverage=1 00:19:43.302 --rc genhtml_function_coverage=1 00:19:43.302 --rc genhtml_legend=1 00:19:43.302 --rc geninfo_all_blocks=1 00:19:43.302 --rc geninfo_unexecuted_blocks=1 00:19:43.302 00:19:43.302 ' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:43.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.302 --rc genhtml_branch_coverage=1 00:19:43.302 --rc genhtml_function_coverage=1 00:19:43.302 --rc genhtml_legend=1 00:19:43.302 --rc geninfo_all_blocks=1 00:19:43.302 --rc geninfo_unexecuted_blocks=1 00:19:43.302 00:19:43.302 ' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
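The lt 1.15 2 walk through scripts/common.sh traced above is a plain component-wise version comparison: split both version strings on '.', '-' and ':', then compare numerically left to right, so lcov 1.15 sorts before 2 and the 1.x-style --rc lcov_branch_coverage=1 option names get exported. A condensed, self-contained sketch of just the '<' path (the real cmp_versions also routes components through decimal() and handles the other operators):

# Sketch of the traced logic: split on '.', '-', ':'; missing components
# count as 0; the first unequal component decides the comparison.
lt() {
    local -a ver1 ver2
    local v d1 d2 len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1    # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov 1.15 predates 2.x'    # prints the message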
00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.302 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.302 08:56:06 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:43.302 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:19:43.302 00:19:43.302 real 0m0.210s 00:19:43.302 user 0m0.122s 00:19:43.302 sys 0m0.103s 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.302 ************************************ 00:19:43.302 END TEST nvmf_multicontroller 00:19:43.302 ************************************ 00:19:43.302 08:56:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:43.303 08:56:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:43.303 08:56:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:43.303 08:56:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.303 ************************************ 00:19:43.303 START TEST nvmf_aer 00:19:43.303 ************************************ 00:19:43.303 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:43.562 * Looking for test storage... 
00:19:43.562 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:43.562 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:43.562 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lcov --version 00:19:43.562 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:43.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.563 --rc genhtml_branch_coverage=1 00:19:43.563 --rc genhtml_function_coverage=1 00:19:43.563 --rc genhtml_legend=1 00:19:43.563 --rc geninfo_all_blocks=1 00:19:43.563 --rc geninfo_unexecuted_blocks=1 00:19:43.563 00:19:43.563 ' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:19:43.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.563 --rc genhtml_branch_coverage=1 00:19:43.563 --rc genhtml_function_coverage=1 00:19:43.563 --rc genhtml_legend=1 00:19:43.563 --rc geninfo_all_blocks=1 00:19:43.563 --rc geninfo_unexecuted_blocks=1 00:19:43.563 00:19:43.563 ' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:43.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.563 --rc genhtml_branch_coverage=1 00:19:43.563 --rc genhtml_function_coverage=1 00:19:43.563 --rc genhtml_legend=1 00:19:43.563 --rc geninfo_all_blocks=1 00:19:43.563 --rc geninfo_unexecuted_blocks=1 00:19:43.563 00:19:43.563 ' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:43.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.563 --rc genhtml_branch_coverage=1 00:19:43.563 --rc genhtml_function_coverage=1 00:19:43.563 --rc genhtml_legend=1 00:19:43.563 --rc geninfo_all_blocks=1 00:19:43.563 --rc geninfo_unexecuted_blocks=1 00:19:43.563 00:19:43.563 ' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:19:43.563 08:56:06 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:50.137 08:56:12 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:50.137 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:50.137 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:50.137 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:50.138 Found net devices under 0000:da:00.0: mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:50.138 
08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:50.138 Found net devices under 0000:da:00.1: mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # rdma_device_init 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.138 08:56:12 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:50.138 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:50.138 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:50.138 altname enp218s0f0np0 00:19:50.138 altname ens818f0np0 00:19:50.138 inet 192.168.100.8/24 scope global mlx_0_0 00:19:50.138 valid_lft forever preferred_lft forever 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:50.138 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:50.138 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:50.138 altname enp218s0f1np1 00:19:50.138 altname ens818f1np1 00:19:50.138 inet 192.168.100.9/24 scope global mlx_0_1 00:19:50.138 valid_lft forever preferred_lft forever 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:50.138 192.168.100.9' 00:19:50.138 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:50.138 192.168.100.9' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # head -n 1 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:50.139 192.168.100.9' 
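The head/tail pipelines traced here are how common.sh turns the newline-separated RDMA_IP_LIST into the two target addresses: "head -n 1" keeps the first interface IP and "tail -n +2 | head -n 1" keeps the second. A minimal standalone sketch of that selection logic (variable names taken from the trace; the final echo is illustrative only):

    # Split a newline-separated IP list into first/second target IPs,
    # mirroring the head/tail pipelines traced above.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"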
00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # tail -n +2 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # head -n 1 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=474312 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 474312 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 474312 ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 [2024-11-06 08:56:12.372923] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:50.139 [2024-11-06 08:56:12.372967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.139 [2024-11-06 08:56:12.445422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.139 [2024-11-06 08:56:12.488039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.139 [2024-11-06 08:56:12.488074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.139 [2024-11-06 08:56:12.488082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.139 [2024-11-06 08:56:12.488089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:50.139 [2024-11-06 08:56:12.488094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.139 [2024-11-06 08:56:12.489618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.139 [2024-11-06 08:56:12.489724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.139 [2024-11-06 08:56:12.489832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.139 [2024-11-06 08:56:12.489834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 [2024-11-06 08:56:12.656476] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xed1da0/0xed6290) succeed. 00:19:50.139 [2024-11-06 08:56:12.665532] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xed3430/0xf17930) succeed. 
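The nvmfappstart sequence traced above boils down to: load nvme-rdma, launch build/bin/nvmf_tgt with the requested core mask, remember its pid, and block until the app answers on /var/tmp/spdk.sock. A condensed sketch, under the assumption that polling rpc.py is an acceptable stand-in for the real waitforlisten helper in autotest_common.sh:

    # Start the NVMe-oF target and wait for its RPC socket to come up.
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    modprobe nvme-rdma
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket instead of sleeping a fixed time
    # (waitforlisten stand-in; rpc_get_methods is a cheap no-op query).
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done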
00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 Malloc0 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 [2024-11-06 08:56:12.845586] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 [ 00:19:50.139 { 00:19:50.139 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:50.139 "subtype": "Discovery", 00:19:50.139 "listen_addresses": [], 00:19:50.139 "allow_any_host": true, 00:19:50.139 "hosts": [] 00:19:50.139 }, 00:19:50.139 { 00:19:50.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.139 "subtype": "NVMe", 00:19:50.139 "listen_addresses": [ 00:19:50.139 { 00:19:50.139 "trtype": "RDMA", 00:19:50.139 "adrfam": "IPv4", 00:19:50.139 "traddr": "192.168.100.8", 00:19:50.139 "trsvcid": "4420" 00:19:50.139 } 00:19:50.139 ], 00:19:50.139 "allow_any_host": true, 00:19:50.139 "hosts": [], 00:19:50.139 "serial_number": "SPDK00000000000001", 00:19:50.139 "model_number": "SPDK bdev Controller", 00:19:50.139 "max_namespaces": 2, 00:19:50.139 "min_cntlid": 1, 00:19:50.139 "max_cntlid": 65519, 00:19:50.139 "namespaces": [ 00:19:50.139 { 00:19:50.139 "nsid": 1, 00:19:50.139 "bdev_name": "Malloc0", 00:19:50.139 "name": "Malloc0", 00:19:50.139 "nguid": "957383B7F13E43D7BE6082DF10B2C96A", 00:19:50.139 "uuid": "957383b7-f13e-43d7-be60-82df10b2c96a" 00:19:50.139 } 00:19:50.139 ] 00:19:50.139 } 00:19:50.139 ] 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=474339 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:50.139 08:56:12 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 Malloc1 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.139 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.139 [ 00:19:50.139 { 00:19:50.140 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:50.140 "subtype": "Discovery", 00:19:50.140 "listen_addresses": [], 00:19:50.140 "allow_any_host": true, 00:19:50.140 "hosts": [] 00:19:50.140 }, 00:19:50.140 { 00:19:50.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.140 "subtype": "NVMe", 00:19:50.140 "listen_addresses": [ 00:19:50.140 { 00:19:50.140 "trtype": "RDMA", 00:19:50.140 "adrfam": "IPv4", 00:19:50.140 "traddr": "192.168.100.8", 00:19:50.140 "trsvcid": "4420" 00:19:50.140 } 00:19:50.140 ], 00:19:50.140 "allow_any_host": true, 00:19:50.140 "hosts": [], 00:19:50.140 "serial_number": "SPDK00000000000001", 00:19:50.140 "model_number": "SPDK bdev Controller", 00:19:50.140 "max_namespaces": 2, 00:19:50.140 "min_cntlid": 1, 00:19:50.140 "max_cntlid": 65519, 00:19:50.140 "namespaces": [ 00:19:50.140 { 00:19:50.140 "nsid": 1, 00:19:50.140 "bdev_name": "Malloc0", 00:19:50.140 "name": "Malloc0", 00:19:50.140 "nguid": "957383B7F13E43D7BE6082DF10B2C96A", 00:19:50.140 "uuid": "957383b7-f13e-43d7-be60-82df10b2c96a" 00:19:50.140 }, 00:19:50.140 { 00:19:50.140 "nsid": 2, 00:19:50.140 "bdev_name": "Malloc1", 00:19:50.140 "name": "Malloc1", 00:19:50.140 "nguid": "49DDE6942EF54EC598B91FDB85055054", 00:19:50.140 "uuid": "49dde694-2ef5-4ec5-98b9-1fdb85055054" 00:19:50.140 } 00:19:50.140 ] 00:19:50.140 } 00:19:50.140 ] 00:19:50.140 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.140 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 474339 00:19:50.399 Asynchronous Event Request test 00:19:50.399 Attaching to 192.168.100.8 00:19:50.399 Attached to 192.168.100.8 00:19:50.399 Registering asynchronous event callbacks... 00:19:50.399 Starting namespace attribute notice tests for all controllers... 00:19:50.399 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:50.399 aer_cb - Changed Namespace 00:19:50.399 Cleaning up... 
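The handshake visible above works through the touch file: the aer binary registers its AER callback and then creates /tmp/aer_touch_file, while the shell polls for that file in 0.1 s steps (up to 200 tries, roughly 20 s) before hot-adding Malloc1, which is what fires the "Changed Namespace" event in the test output. A minimal sketch of that polling loop, reconstructed from the trace rather than copied from autotest_common.sh:

    # Poll for a sentinel file in 0.1 s steps, giving up after 200 tries.
    waitforfile() {
        local file=$1 i=0
        while [ ! -e "$file" ] && [ $i -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$file" ]    # status 0 only if the file finally showed up
    }
    waitforfile /tmp/aer_touch_file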
00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:50.399 rmmod nvme_rdma 00:19:50.399 rmmod nvme_fabrics 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 474312 ']' 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 474312 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 474312 ']' 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 474312 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474312 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474312' 00:19:50.399 killing process with pid 
474312 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 474312 00:19:50.399 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 474312 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:50.659 00:19:50.659 real 0m7.320s 00:19:50.659 user 0m5.992s 00:19:50.659 sys 0m4.824s 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.659 ************************************ 00:19:50.659 END TEST nvmf_aer 00:19:50.659 ************************************ 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.659 ************************************ 00:19:50.659 START TEST nvmf_async_init 00:19:50.659 ************************************ 00:19:50.659 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:50.918 * Looking for test storage... 00:19:50.918 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lcov --version 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:19:50.918 08:56:13 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:50.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.918 --rc genhtml_branch_coverage=1 00:19:50.918 --rc genhtml_function_coverage=1 00:19:50.918 --rc genhtml_legend=1 00:19:50.918 --rc geninfo_all_blocks=1 00:19:50.918 --rc geninfo_unexecuted_blocks=1 00:19:50.918 00:19:50.918 ' 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:19:50.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.918 --rc genhtml_branch_coverage=1 00:19:50.918 --rc genhtml_function_coverage=1 00:19:50.918 --rc genhtml_legend=1 00:19:50.918 --rc geninfo_all_blocks=1 00:19:50.918 --rc geninfo_unexecuted_blocks=1 00:19:50.918 00:19:50.918 ' 00:19:50.918 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:50.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.918 --rc genhtml_branch_coverage=1 00:19:50.918 --rc genhtml_function_coverage=1 00:19:50.918 --rc genhtml_legend=1 00:19:50.918 --rc geninfo_all_blocks=1 00:19:50.918 --rc geninfo_unexecuted_blocks=1 00:19:50.918 00:19:50.918 ' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:50.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.919 --rc genhtml_branch_coverage=1 00:19:50.919 --rc genhtml_function_coverage=1 00:19:50.919 --rc genhtml_legend=1 00:19:50.919 --rc geninfo_all_blocks=1 00:19:50.919 --rc geninfo_unexecuted_blocks=1 00:19:50.919 00:19:50.919 ' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.919 
08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.919 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
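The "[: : integer expression expected" complaint from common.sh line 33 above is bash evaluating '[' '' -eq 1 ']': an unset or empty variable fed into a numeric test. It is harmless here, because the test simply evaluates false and the branch is skipped; the usual guard is a default expansion. A tiny reproduction (the variable name is illustrative, not the one used in common.sh):

    # Reproduces the harmless "[: : integer expression expected" message:
    # a numeric -eq test on an empty string.
    flag=''
    if [ "$flag" -eq 1 ]; then echo enabled; fi     # prints the error; test is false
    if [ "${flag:-0}" -eq 1 ]; then echo enabled; fi  # guarded with a default: no error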
00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=70bb5507c23a47d6ac9b61a35cd41b33 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.919 08:56:13 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:57.491 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:57.491 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:57.491 Found net devices under 0000:da:00.0: mlx_0_0 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:57.491 Found net devices under 0000:da:00.1: mlx_0_1 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # rdma_device_init 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:57.491 08:56:19 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:57.491 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:57.491 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:57.491 altname enp218s0f0np0 00:19:57.491 altname ens818f0np0 00:19:57.491 inet 192.168.100.8/24 scope global mlx_0_0 00:19:57.491 valid_lft forever preferred_lft forever 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:57.491 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:57.491 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:57.491 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:57.491 altname enp218s0f1np1 00:19:57.491 altname ens818f1np1 00:19:57.491 inet 192.168.100.9/24 scope global mlx_0_1 00:19:57.492 valid_lft forever preferred_lft forever 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:57.492 192.168.100.9' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:57.492 192.168.100.9' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # head -n 1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:57.492 192.168.100.9' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # tail -n +2 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # head -n 1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 
-- # modprobe nvme-rdma 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=477632 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 477632 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 477632 ']' 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 [2024-11-06 08:56:19.718811] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:57.492 [2024-11-06 08:56:19.718862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.492 [2024-11-06 08:56:19.792255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.492 [2024-11-06 08:56:19.833967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.492 [2024-11-06 08:56:19.834001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.492 [2024-11-06 08:56:19.834008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.492 [2024-11-06 08:56:19.834014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.492 [2024-11-06 08:56:19.834019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
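With the target up, the async_init test provisions a null bdev behind an NVMe-oF subsystem and then attaches to it over RDMA, as traced below. A condensed sketch of that RPC flow; rpc_cmd in the trace is the test wrapper around scripts/rpc.py, so the explicit rpc.py form shown here should be equivalent:

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
    nguid=$(uuidgen | tr -d -)            # e.g. 70bb5507c23a47d6ac9b61a35cd41b33
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc bdev_null_create null0 1024 512  # 1024 MB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # Attach as a host; the remote namespace shows up locally as nvme0n1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0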
00:19:57.492 [2024-11-06 08:56:19.834594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 [2024-11-06 08:56:19.990810] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e8bb40/0x1e90030) succeed. 00:19:57.492 [2024-11-06 08:56:19.999550] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e8cff0/0x1ed16d0) succeed. 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 null0 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 70bb5507c23a47d6ac9b61a35cd41b33 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 [2024-11-06 08:56:20.079487] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 nvme0n1 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 [ 00:19:57.492 { 00:19:57.492 "name": "nvme0n1", 00:19:57.492 "aliases": [ 00:19:57.492 "70bb5507-c23a-47d6-ac9b-61a35cd41b33" 00:19:57.492 ], 00:19:57.492 "product_name": "NVMe disk", 00:19:57.492 "block_size": 512, 00:19:57.492 "num_blocks": 2097152, 00:19:57.492 "uuid": "70bb5507-c23a-47d6-ac9b-61a35cd41b33", 00:19:57.492 "numa_id": 1, 00:19:57.492 "assigned_rate_limits": { 00:19:57.492 "rw_ios_per_sec": 0, 00:19:57.492 "rw_mbytes_per_sec": 0, 00:19:57.492 "r_mbytes_per_sec": 0, 00:19:57.492 "w_mbytes_per_sec": 0 00:19:57.492 }, 00:19:57.492 "claimed": false, 00:19:57.492 "zoned": false, 00:19:57.492 "supported_io_types": { 00:19:57.492 "read": true, 00:19:57.492 "write": true, 00:19:57.492 "unmap": false, 00:19:57.492 "flush": true, 00:19:57.492 "reset": true, 00:19:57.492 "nvme_admin": true, 00:19:57.492 "nvme_io": true, 00:19:57.492 "nvme_io_md": false, 00:19:57.492 "write_zeroes": true, 00:19:57.492 "zcopy": false, 00:19:57.492 "get_zone_info": false, 00:19:57.492 "zone_management": false, 00:19:57.492 "zone_append": false, 00:19:57.492 "compare": true, 00:19:57.492 "compare_and_write": true, 00:19:57.492 "abort": true, 00:19:57.492 "seek_hole": false, 00:19:57.492 "seek_data": false, 00:19:57.492 "copy": true, 00:19:57.492 "nvme_iov_md": false 00:19:57.492 }, 00:19:57.492 "memory_domains": [ 00:19:57.492 { 00:19:57.492 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:57.492 "dma_device_type": 0 00:19:57.492 } 00:19:57.492 ], 00:19:57.492 "driver_specific": { 00:19:57.492 "nvme": [ 00:19:57.492 { 00:19:57.492 "trid": { 00:19:57.492 "trtype": "RDMA", 00:19:57.492 "adrfam": "IPv4", 00:19:57.492 "traddr": "192.168.100.8", 00:19:57.492 "trsvcid": "4420", 00:19:57.492 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:57.492 }, 00:19:57.492 "ctrlr_data": { 00:19:57.492 "cntlid": 1, 00:19:57.492 "vendor_id": "0x8086", 00:19:57.492 "model_number": "SPDK bdev Controller", 00:19:57.492 "serial_number": "00000000000000000000", 00:19:57.492 "firmware_revision": "25.01", 00:19:57.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:57.492 "oacs": { 00:19:57.492 "security": 0, 
00:19:57.492 "format": 0, 00:19:57.492 "firmware": 0, 00:19:57.492 "ns_manage": 0 00:19:57.492 }, 00:19:57.492 "multi_ctrlr": true, 00:19:57.492 "ana_reporting": false 00:19:57.492 }, 00:19:57.492 "vs": { 00:19:57.492 "nvme_version": "1.3" 00:19:57.492 }, 00:19:57.492 "ns_data": { 00:19:57.492 "id": 1, 00:19:57.492 "can_share": true 00:19:57.492 } 00:19:57.492 } 00:19:57.492 ], 00:19:57.492 "mp_policy": "active_passive" 00:19:57.492 } 00:19:57.492 } 00:19:57.492 ] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 [2024-11-06 08:56:20.194829] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:57.492 [2024-11-06 08:56:20.220057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.492 [2024-11-06 08:56:20.250225] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.492 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 [ 00:19:57.492 { 00:19:57.492 "name": "nvme0n1", 00:19:57.492 "aliases": [ 00:19:57.492 "70bb5507-c23a-47d6-ac9b-61a35cd41b33" 00:19:57.492 ], 00:19:57.492 "product_name": "NVMe disk", 00:19:57.492 "block_size": 512, 00:19:57.492 "num_blocks": 2097152, 00:19:57.492 "uuid": "70bb5507-c23a-47d6-ac9b-61a35cd41b33", 00:19:57.492 "numa_id": 1, 00:19:57.492 "assigned_rate_limits": { 00:19:57.492 "rw_ios_per_sec": 0, 00:19:57.492 "rw_mbytes_per_sec": 0, 00:19:57.492 "r_mbytes_per_sec": 0, 00:19:57.492 "w_mbytes_per_sec": 0 00:19:57.492 }, 00:19:57.492 "claimed": false, 00:19:57.492 "zoned": false, 00:19:57.492 "supported_io_types": { 00:19:57.492 "read": true, 00:19:57.492 "write": true, 00:19:57.492 "unmap": false, 00:19:57.492 "flush": true, 00:19:57.492 "reset": true, 00:19:57.492 "nvme_admin": true, 00:19:57.492 "nvme_io": true, 00:19:57.492 "nvme_io_md": false, 00:19:57.492 "write_zeroes": true, 00:19:57.492 "zcopy": false, 00:19:57.492 "get_zone_info": false, 00:19:57.492 "zone_management": false, 00:19:57.492 "zone_append": false, 00:19:57.492 "compare": true, 00:19:57.492 "compare_and_write": true, 00:19:57.492 "abort": true, 00:19:57.492 "seek_hole": false, 00:19:57.492 "seek_data": false, 00:19:57.492 "copy": true, 00:19:57.493 "nvme_iov_md": false 00:19:57.493 }, 00:19:57.493 "memory_domains": [ 00:19:57.493 { 00:19:57.493 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:57.493 "dma_device_type": 0 00:19:57.493 } 00:19:57.493 ], 00:19:57.493 "driver_specific": { 00:19:57.493 "nvme": [ 00:19:57.493 { 00:19:57.493 "trid": { 00:19:57.493 "trtype": "RDMA", 00:19:57.493 "adrfam": "IPv4", 00:19:57.493 "traddr": "192.168.100.8", 
00:19:57.493 "trsvcid": "4420", 00:19:57.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:57.493 }, 00:19:57.493 "ctrlr_data": { 00:19:57.493 "cntlid": 2, 00:19:57.493 "vendor_id": "0x8086", 00:19:57.493 "model_number": "SPDK bdev Controller", 00:19:57.493 "serial_number": "00000000000000000000", 00:19:57.493 "firmware_revision": "25.01", 00:19:57.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:57.493 "oacs": { 00:19:57.493 "security": 0, 00:19:57.493 "format": 0, 00:19:57.493 "firmware": 0, 00:19:57.493 "ns_manage": 0 00:19:57.493 }, 00:19:57.493 "multi_ctrlr": true, 00:19:57.493 "ana_reporting": false 00:19:57.493 }, 00:19:57.493 "vs": { 00:19:57.493 "nvme_version": "1.3" 00:19:57.493 }, 00:19:57.493 "ns_data": { 00:19:57.493 "id": 1, 00:19:57.493 "can_share": true 00:19:57.493 } 00:19:57.493 } 00:19:57.493 ], 00:19:57.493 "mp_policy": "active_passive" 00:19:57.493 } 00:19:57.493 } 00:19:57.493 ] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.OWZboS5HK4 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.OWZboS5HK4 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.OWZboS5HK4 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 [2024-11-06 08:56:20.333065] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 [2024-11-06 08:56:20.353122] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.493 nvme0n1 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 [ 00:19:57.493 { 00:19:57.493 "name": "nvme0n1", 00:19:57.493 "aliases": [ 00:19:57.493 "70bb5507-c23a-47d6-ac9b-61a35cd41b33" 00:19:57.493 ], 00:19:57.493 "product_name": "NVMe disk", 00:19:57.493 "block_size": 512, 00:19:57.493 "num_blocks": 2097152, 00:19:57.493 "uuid": "70bb5507-c23a-47d6-ac9b-61a35cd41b33", 00:19:57.493 "numa_id": 1, 00:19:57.493 "assigned_rate_limits": { 00:19:57.493 "rw_ios_per_sec": 0, 00:19:57.493 "rw_mbytes_per_sec": 0, 00:19:57.493 "r_mbytes_per_sec": 0, 00:19:57.493 "w_mbytes_per_sec": 0 00:19:57.493 }, 00:19:57.493 "claimed": false, 00:19:57.493 "zoned": false, 00:19:57.493 "supported_io_types": { 00:19:57.493 "read": true, 00:19:57.493 "write": true, 00:19:57.493 "unmap": false, 00:19:57.493 "flush": true, 00:19:57.493 "reset": true, 00:19:57.493 "nvme_admin": true, 00:19:57.493 "nvme_io": true, 00:19:57.493 "nvme_io_md": false, 00:19:57.493 "write_zeroes": true, 00:19:57.493 "zcopy": false, 00:19:57.493 "get_zone_info": false, 00:19:57.493 "zone_management": false, 00:19:57.493 "zone_append": false, 00:19:57.493 "compare": true, 00:19:57.493 "compare_and_write": true, 00:19:57.493 "abort": true, 00:19:57.493 "seek_hole": false, 00:19:57.493 "seek_data": false, 00:19:57.493 "copy": true, 00:19:57.493 "nvme_iov_md": false 00:19:57.493 }, 00:19:57.493 "memory_domains": [ 00:19:57.493 { 00:19:57.493 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:57.493 "dma_device_type": 0 00:19:57.493 } 00:19:57.493 ], 00:19:57.493 "driver_specific": { 00:19:57.493 "nvme": [ 00:19:57.493 { 00:19:57.493 "trid": { 00:19:57.493 "trtype": "RDMA", 00:19:57.493 "adrfam": "IPv4", 00:19:57.493 "traddr": "192.168.100.8", 00:19:57.493 "trsvcid": "4421", 00:19:57.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:57.493 }, 00:19:57.493 "ctrlr_data": { 00:19:57.493 "cntlid": 3, 00:19:57.493 "vendor_id": "0x8086", 00:19:57.493 "model_number": "SPDK bdev Controller", 00:19:57.493 
"serial_number": "00000000000000000000", 00:19:57.493 "firmware_revision": "25.01", 00:19:57.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:57.493 "oacs": { 00:19:57.493 "security": 0, 00:19:57.493 "format": 0, 00:19:57.493 "firmware": 0, 00:19:57.493 "ns_manage": 0 00:19:57.493 }, 00:19:57.493 "multi_ctrlr": true, 00:19:57.493 "ana_reporting": false 00:19:57.493 }, 00:19:57.493 "vs": { 00:19:57.493 "nvme_version": "1.3" 00:19:57.493 }, 00:19:57.493 "ns_data": { 00:19:57.493 "id": 1, 00:19:57.493 "can_share": true 00:19:57.493 } 00:19:57.493 } 00:19:57.493 ], 00:19:57.493 "mp_policy": "active_passive" 00:19:57.493 } 00:19:57.493 } 00:19:57.493 ] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.OWZboS5HK4 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.493 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:57.493 rmmod nvme_rdma 00:19:57.752 rmmod nvme_fabrics 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 477632 ']' 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 477632 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 477632 ']' 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 477632 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477632 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:57.752 08:56:20 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477632' 00:19:57.752 killing process with pid 477632 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 477632 00:19:57.752 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 477632 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:58.011 00:19:58.011 real 0m7.120s 00:19:58.011 user 0m2.889s 00:19:58.011 sys 0m4.744s 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.011 ************************************ 00:19:58.011 END TEST nvmf_async_init 00:19:58.011 ************************************ 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.011 ************************************ 00:19:58.011 START TEST dma 00:19:58.011 ************************************ 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:58.011 * Looking for test storage... 
00:19:58.011 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lcov --version 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.011 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:19:58.012 08:56:20 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.012 --rc genhtml_branch_coverage=1 00:19:58.012 --rc genhtml_function_coverage=1 00:19:58.012 --rc genhtml_legend=1 00:19:58.012 --rc geninfo_all_blocks=1 00:19:58.012 --rc geninfo_unexecuted_blocks=1 00:19:58.012 00:19:58.012 ' 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:19:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.012 --rc genhtml_branch_coverage=1 00:19:58.012 --rc genhtml_function_coverage=1 00:19:58.012 --rc genhtml_legend=1 00:19:58.012 --rc geninfo_all_blocks=1 00:19:58.012 --rc geninfo_unexecuted_blocks=1 00:19:58.012 00:19:58.012 ' 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.012 --rc genhtml_branch_coverage=1 00:19:58.012 --rc genhtml_function_coverage=1 00:19:58.012 --rc genhtml_legend=1 00:19:58.012 --rc geninfo_all_blocks=1 00:19:58.012 --rc geninfo_unexecuted_blocks=1 00:19:58.012 00:19:58.012 ' 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.012 --rc genhtml_branch_coverage=1 00:19:58.012 --rc genhtml_function_coverage=1 00:19:58.012 --rc genhtml_legend=1 00:19:58.012 --rc geninfo_all_blocks=1 00:19:58.012 --rc geninfo_unexecuted_blocks=1 00:19:58.012 00:19:58.012 ' 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:58.012 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:58.271 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:19:58.271 08:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:19:58.272 08:56:21 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.682 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:03.943 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:03.943 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:03.943 Found net devices under 0000:da:00.0: mlx_0_0 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:03.943 Found net devices under 0000:da:00.1: mlx_0_1 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # is_hw=yes 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # rdma_device_init 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:03.943 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:03.944 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:03.944 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:20:03.944 altname enp218s0f0np0 00:20:03.944 altname ens818f0np0 00:20:03.944 inet 192.168.100.8/24 scope global mlx_0_0 00:20:03.944 valid_lft forever preferred_lft forever 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:03.944 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:03.944 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:20:03.944 altname enp218s0f1np1 00:20:03.944 altname ens818f1np1 00:20:03.944 inet 192.168.100.9/24 scope global mlx_0_1 00:20:03.944 valid_lft forever preferred_lft forever 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # return 0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:20:03.944 192.168.100.9' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:20:03.944 192.168.100.9' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # head -n 1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:20:03.944 192.168.100.9' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # tail -n +2 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # head -n 1 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # nvmfpid=480951 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # waitforlisten 480951 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 480951 ']' 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.944 08:56:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:04.203 [2024-11-06 08:56:26.964765] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:20:04.203 [2024-11-06 08:56:26.964817] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.203 [2024-11-06 08:56:27.040925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:04.203 [2024-11-06 08:56:27.080274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.203 [2024-11-06 08:56:27.080311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.203 [2024-11-06 08:56:27.080317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.203 [2024-11-06 08:56:27.080322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.203 [2024-11-06 08:56:27.080327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
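[Editor's note] With both mlx5 ports discovered (192.168.100.8/9) and nvme-rdma loaded, the dma test boots a two-core target (-m 0x3) and configures it over JSON-RPC. The rpc_cmd calls traced below condense to the following plain rpc.py sequence; rpc_cmd is the test suite's wrapper around these same RPCs, and the addresses, sizes, and names are the ones recorded in this run:

    # RDMA transport with the shared-buffer count passed via NVMF_TRANSPORT_OPTS.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    # 256 MiB malloc bdev with 512-byte blocks to serve as the namespace.
    ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
    # Subsystem allowing any host (-a), with the serial number from the trace.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    # Listen on the first discovered RDMA IP, port 4420.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The test_dma binary then attaches from the host side using a matching JSON config, which gen_nvmf_target_json prints later in this trace.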
00:20:04.203 [2024-11-06 08:56:27.081584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.203 [2024-11-06 08:56:27.081584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.203 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.203 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:20:04.203 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:04.203 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.203 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 [2024-11-06 08:56:27.246212] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x131f6e0/0x1323bd0) succeed. 00:20:04.463 [2024-11-06 08:56:27.255008] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1320c30/0x1365270) succeed. 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 Malloc0 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 [2024-11-06 08:56:27.409272] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # config=() 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # local subsystem config 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:04.463 { 00:20:04.463 "params": { 00:20:04.463 "name": "Nvme$subsystem", 00:20:04.463 "trtype": "$TEST_TRANSPORT", 00:20:04.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.463 "adrfam": "ipv4", 00:20:04.463 "trsvcid": "$NVMF_PORT", 00:20:04.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.463 "hdgst": ${hdgst:-false}, 00:20:04.463 "ddgst": ${ddgst:-false} 00:20:04.463 }, 00:20:04.463 "method": "bdev_nvme_attach_controller" 00:20:04.463 } 00:20:04.463 EOF 00:20:04.463 )") 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # cat 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # jq . 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@583 -- # IFS=, 00:20:04.463 08:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:04.463 "params": { 00:20:04.463 "name": "Nvme0", 00:20:04.463 "trtype": "rdma", 00:20:04.463 "traddr": "192.168.100.8", 00:20:04.463 "adrfam": "ipv4", 00:20:04.463 "trsvcid": "4420", 00:20:04.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:04.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:04.463 "hdgst": false, 00:20:04.463 "ddgst": false 00:20:04.463 }, 00:20:04.463 "method": "bdev_nvme_attach_controller" 00:20:04.463 }' 00:20:04.463 [2024-11-06 08:56:27.458930] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:20:04.463 [2024-11-06 08:56:27.458972] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480973 ]
00:20:04.721 [2024-11-06 08:56:27.534049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:04.721 [2024-11-06 08:56:27.575483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:04.721 [2024-11-06 08:56:27.575486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:09.991 bdev Nvme0n1 reports 1 memory domains
00:20:09.991 bdev Nvme0n1 supports RDMA memory domain
00:20:09.991 Initialization complete, running randrw IO for 5 sec on 2 cores
00:20:09.991 ==========================================================================
00:20:09.991 Latency [us]
00:20:09.991 IOPS MiB/s Average min max
00:20:09.991 Core 2: 21002.52 82.04 761.13 256.72 8763.93
00:20:09.991 Core 3: 21072.10 82.31 758.63 253.69 8792.54
00:20:09.991 ==========================================================================
00:20:09.991 Total : 42074.62 164.35 759.87 253.69 8792.54
00:20:09.991
00:20:09.991 Total operations: 210410, translate 210410 pull_push 0 memzero 0
00:20:09.991 08:56:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:20:09.991 08:56:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json
00:20:09.991 08:56:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq .
00:20:09.991 [2024-11-06 08:56:32.982721] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:20:09.991 [2024-11-06 08:56:32.982775] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481900 ]
00:20:10.251 [2024-11-06 08:56:33.057667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:10.251 [2024-11-06 08:56:33.096847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:10.251 [2024-11-06 08:56:33.096850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:15.523 bdev Malloc0 reports 2 memory domains
00:20:15.523 bdev Malloc0 doesn't support RDMA memory domain
00:20:15.523 Initialization complete, running randrw IO for 5 sec on 2 cores
00:20:15.523 ==========================================================================
00:20:15.523 Latency [us]
00:20:15.523 IOPS MiB/s Average min max
00:20:15.523 Core 2: 14095.17 55.06 1134.43 432.07 1428.82
00:20:15.523 Core 3: 13970.22 54.57 1144.57 491.80 2085.23
00:20:15.523 ==========================================================================
00:20:15.523 Total : 28065.39 109.63 1139.48 432.07 2085.23
00:20:15.523
00:20:15.523 Total operations: 140379, translate 0 pull_push 561516 memzero 0
00:20:15.523 08:56:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:20:15.523 08:56:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:20:15.523 08:56:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:20:15.523 08:56:38 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:20:15.523 Ignoring -M option
00:20:15.523 [2024-11-06 08:56:38.417236] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:20:15.523 [2024-11-06 08:56:38.417289] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482811 ]
00:20:15.523 [2024-11-06 08:56:38.493517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:15.523 [2024-11-06 08:56:38.532306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:15.523 [2024-11-06 08:56:38.532308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:22.090 bdev b1c6d9d3-76e6-4577-8c8e-e391eb3ef6c4 reports 1 memory domains
00:20:22.090 bdev b1c6d9d3-76e6-4577-8c8e-e391eb3ef6c4 supports RDMA memory domain
00:20:22.090 Initialization complete, running randread IO for 5 sec on 2 cores
00:20:22.090 ==========================================================================
00:20:22.090 Latency [us]
00:20:22.090 IOPS MiB/s Average min max
00:20:22.090 Core 2: 67603.85 264.08 235.71 92.62 3527.25
00:20:22.090 Core 3: 66059.65 258.05 241.19 75.41 3475.48
00:20:22.090 ==========================================================================
00:20:22.090 Total : 133663.50 522.12 238.41 75.41 3527.25
00:20:22.090
00:20:22.090 Total operations: 668406, translate 0 pull_push 0 memzero 668406
00:20:22.090 08:56:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:20:22.090 [2024-11-06 08:56:44.078916] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:20:23.468 Initializing NVMe Controllers
00:20:23.468 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:20:23.468 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:20:23.468 Initialization complete. Launching workers.
00:20:23.468 ========================================================
00:20:23.468 Latency(us)
00:20:23.468 Device Information : IOPS MiB/s Average min max
00:20:23.468 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.69 7.91 7957.39 4974.13 10971.35
00:20:23.468 ========================================================
00:20:23.468 Total : 2024.69 7.91 7957.39 4974.13 10971.35
00:20:23.468
00:20:23.468 08:56:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:20:23.468 08:56:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:20:23.468 08:56:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:20:23.468 08:56:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:20:23.468 [2024-11-06 08:56:46.420744] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:20:23.468 [2024-11-06 08:56:46.420787] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484185 ]
00:20:23.727 [2024-11-06 08:56:46.497256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:23.727 [2024-11-06 08:56:46.538624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:23.727 [2024-11-06 08:56:46.538626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:29.001 bdev cff901a2-588f-4ae8-af63-9104e70c2d1a reports 1 memory domains
00:20:29.001 bdev cff901a2-588f-4ae8-af63-9104e70c2d1a supports RDMA memory domain
00:20:29.001 Initialization complete, running randrw IO for 5 sec on 2 cores
00:20:29.001 ==========================================================================
00:20:29.001 Latency [us]
00:20:29.001 IOPS MiB/s Average min max
00:20:29.001 Core 2: 18438.42 72.03 866.88 37.93 11705.12
00:20:29.001 Core 3: 18618.95 72.73 858.56 13.26 11295.60
00:20:29.001 ==========================================================================
00:20:29.001 Total : 37057.37 144.76 862.70 13.26 11705.12
00:20:29.001
00:20:29.001 Total operations: 185351, translate 185248 pull_push 0 memzero 103
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@514 -- # nvmfcleanup
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:29.001 rmmod nvme_rdma
00:20:29.001 rmmod nvme_fabrics
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@515 -- # '[' -n 480951 ']'
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # killprocess 480951
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 480951 ']'
00:20:29.001 08:56:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 480951
00:20:29.001 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname
00:20:29.001 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:29.001 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480951
00:20:29.260 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:29.260 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:29.260 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480951'
00:20:29.260 killing process with
pid 480951 00:20:29.260 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 480951 00:20:29.260 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 480951 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:20:29.519 00:20:29.519 real 0m31.502s 00:20:29.519 user 1m34.851s 00:20:29.519 sys 0m5.486s 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:29.519 ************************************ 00:20:29.519 END TEST dma 00:20:29.519 ************************************ 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.519 ************************************ 00:20:29.519 START TEST nvmf_identify 00:20:29.519 ************************************ 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:29.519 * Looking for test storage... 00:20:29.519 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lcov --version 00:20:29.519 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 
00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:20:29.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.779 --rc genhtml_branch_coverage=1 00:20:29.779 --rc genhtml_function_coverage=1 00:20:29.779 --rc genhtml_legend=1 00:20:29.779 --rc geninfo_all_blocks=1 00:20:29.779 --rc geninfo_unexecuted_blocks=1 00:20:29.779 00:20:29.779 ' 00:20:29.779 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:20:29.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.779 --rc genhtml_branch_coverage=1 00:20:29.779 --rc genhtml_function_coverage=1 00:20:29.779 --rc genhtml_legend=1 00:20:29.779 --rc geninfo_all_blocks=1 00:20:29.779 --rc geninfo_unexecuted_blocks=1 00:20:29.779 00:20:29.779 ' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:20:29.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.780 --rc genhtml_branch_coverage=1 00:20:29.780 --rc genhtml_function_coverage=1 00:20:29.780 --rc genhtml_legend=1 00:20:29.780 --rc geninfo_all_blocks=1 00:20:29.780 --rc geninfo_unexecuted_blocks=1 00:20:29.780 00:20:29.780 ' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:20:29.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.780 --rc genhtml_branch_coverage=1 00:20:29.780 --rc genhtml_function_coverage=1 00:20:29.780 --rc genhtml_legend=1 00:20:29.780 --rc geninfo_all_blocks=1 00:20:29.780 --rc geninfo_unexecuted_blocks=1 00:20:29.780 00:20:29.780 ' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:29.780 08:56:52 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:29.780 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:29.780 08:56:52 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:20:29.780 08:56:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.358 08:56:58 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.358 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:36.359 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:36.359 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:36.359 Found net devices under 0000:da:00.0: mlx_0_0 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:36.359 Found net devices under 0000:da:00.1: mlx_0_1 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # rdma_device_init 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@528 -- # allocate_nic_ips 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:36.359 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:36.359 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:20:36.359 altname enp218s0f0np0 00:20:36.359 altname ens818f0np0 00:20:36.359 inet 192.168.100.8/24 scope global mlx_0_0 00:20:36.359 valid_lft forever preferred_lft forever 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:36.359 08:56:58 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:36.359 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:36.359 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:20:36.359 altname enp218s0f1np1 00:20:36.359 altname ens818f1np1 00:20:36.359 inet 192.168.100.9/24 scope global mlx_0_1 00:20:36.359 valid_lft forever preferred_lft forever 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:20:36.359 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:36.360 08:56:58 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:20:36.360 192.168.100.9' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:20:36.360 192.168.100.9' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # head -n 1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:20:36.360 192.168.100.9' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # tail -n +2 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # head -n 1 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=488179 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 488179 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 488179 ']' 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 [2024-11-06 08:56:58.588387] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:20:36.360 [2024-11-06 08:56:58.588431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.360 [2024-11-06 08:56:58.663173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.360 [2024-11-06 08:56:58.706002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.360 [2024-11-06 08:56:58.706037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.360 [2024-11-06 08:56:58.706044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.360 [2024-11-06 08:56:58.706050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.360 [2024-11-06 08:56:58.706055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.360 [2024-11-06 08:56:58.707573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.360 [2024-11-06 08:56:58.707683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.360 [2024-11-06 08:56:58.707790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.360 [2024-11-06 08:56:58.707791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 [2024-11-06 08:56:58.828512] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22e9da0/0x22ee290) succeed. 00:20:36.360 [2024-11-06 08:56:58.837522] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22eb430/0x232f930) succeed. 
00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.360 08:56:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 Malloc0 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 [2024-11-06 08:56:59.063682] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:36.360 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.361 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.361 [ 00:20:36.361 { 00:20:36.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:36.361 "subtype": "Discovery", 00:20:36.361 "listen_addresses": [ 00:20:36.361 { 00:20:36.361 "trtype": "RDMA", 
00:20:36.361 "adrfam": "IPv4", 00:20:36.361 "traddr": "192.168.100.8", 00:20:36.361 "trsvcid": "4420" 00:20:36.361 } 00:20:36.361 ], 00:20:36.361 "allow_any_host": true, 00:20:36.361 "hosts": [] 00:20:36.361 }, 00:20:36.361 { 00:20:36.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.361 "subtype": "NVMe", 00:20:36.361 "listen_addresses": [ 00:20:36.361 { 00:20:36.361 "trtype": "RDMA", 00:20:36.361 "adrfam": "IPv4", 00:20:36.361 "traddr": "192.168.100.8", 00:20:36.361 "trsvcid": "4420" 00:20:36.361 } 00:20:36.361 ], 00:20:36.361 "allow_any_host": true, 00:20:36.361 "hosts": [], 00:20:36.361 "serial_number": "SPDK00000000000001", 00:20:36.361 "model_number": "SPDK bdev Controller", 00:20:36.361 "max_namespaces": 32, 00:20:36.361 "min_cntlid": 1, 00:20:36.361 "max_cntlid": 65519, 00:20:36.361 "namespaces": [ 00:20:36.361 { 00:20:36.361 "nsid": 1, 00:20:36.361 "bdev_name": "Malloc0", 00:20:36.361 "name": "Malloc0", 00:20:36.361 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:36.361 "eui64": "ABCDEF0123456789", 00:20:36.361 "uuid": "d0ff3284-7ef5-431e-b160-5fff3de4be16" 00:20:36.361 } 00:20:36.361 ] 00:20:36.361 } 00:20:36.361 ] 00:20:36.361 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.361 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:36.361 [2024-11-06 08:56:59.114526] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:20:36.361 [2024-11-06 08:56:59.114559] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488326 ] 00:20:36.361 [2024-11-06 08:56:59.170636] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:36.361 [2024-11-06 08:56:59.170704] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:36.361 [2024-11-06 08:56:59.170724] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:36.361 [2024-11-06 08:56:59.170728] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:36.361 [2024-11-06 08:56:59.170758] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:36.361 [2024-11-06 08:56:59.189682] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:20:36.361 [2024-11-06 08:56:59.203960] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0
00:20:36.361 [2024-11-06 08:56:59.203970] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:20:36.361 [2024-11-06 08:56:59.203977] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x180e00
[... 29 further nvme_rdma_create_rsps *DEBUG* entries for 0x2000003cf628 through 0x2000003cfa88 (0x28 apart), each length 0x10 lkey 0x180e00 ...]
00:20:36.361 [2024-11-06 08:56:59.204111] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x180e00
00:20:36.361 [2024-11-06 08:56:59.204115] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:20:36.361 [2024-11-06 08:56:59.204120] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0
00:20:36.361 [2024-11-06 08:56:59.204123] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
[... the repetitive nvme_rdma.c qpair_submit_request / process_recv_completion / request_ready *DEBUG* lines that bracket every admin exchange below are elided ...]
00:20:36.361 [2024-11-06 08:56:59.204153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x180e00
00:20:36.361 [2024-11-06 08:56:59.209220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:20:36.361 [2024-11-06 08:56:59.209232] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:36.361 [2024-11-06 08:56:59.209239] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:20:36.361 [2024-11-06 08:56:59.209244] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:20:36.361 [2024-11-06 08:56:59.209262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.361 [2024-11-06 08:56:59.209295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:20:36.361 [2024-11-06 08:56:59.209300] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:20:36.361 [2024-11-06 08:56:59.209311] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:20:36.361 [2024-11-06 08:56:59.209324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.361 [2024-11-06 08:56:59.209351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:20:36.361 [2024-11-06 08:56:59.209357] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:20:36.361 [2024-11-06 08:56:59.209366] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:20:36.361 [2024-11-06 08:56:59.209378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.362 [2024-11-06 08:56:59.209402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209407] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:36.362 [2024-11-06 08:56:59.209424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.362 [2024-11-06 08:56:59.209447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209452] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:20:36.362 [2024-11-06 08:56:59.209456] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:20:36.362 [2024-11-06 08:56:59.209465] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:36.362 [2024-11-06 08:56:59.209569] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:20:36.362 [2024-11-06 08:56:59.209574] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:36.362 [2024-11-06 08:56:59.209588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.362 [2024-11-06 08:56:59.209611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209615] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:36.362 [2024-11-06 08:56:59.209632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.362 [2024-11-06 08:56:59.209651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209656] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:36.362 [2024-11-06 08:56:59.209660] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:20:36.362 [2024-11-06 08:56:59.209669] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:20:36.362 [2024-11-06 08:56:59.209676] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:20:36.362 [2024-11-06 08:56:59.209690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180e00
00:20:36.362 [2024-11-06 08:56:59.209723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209731] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:20:36.362 [2024-11-06 08:56:59.209735] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:20:36.362 [2024-11-06 08:56:59.209739] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:20:36.362 [2024-11-06 08:56:59.209743] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:20:36.362 [2024-11-06 08:56:59.209747] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
00:20:36.362 [2024-11-06 08:56:59.209751] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:20:36.362 [2024-11-06 08:56:59.209764] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:20:36.362 [2024-11-06 08:56:59.209778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.362 [2024-11-06 08:56:59.209804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.362 [2024-11-06 08:56:59.209827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.362 [2024-11-06 08:56:59.209837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.362 [2024-11-06 08:56:59.209847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.362 [2024-11-06 08:56:59.209851] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:20:36.362 [2024-11-06 08:56:59.209864] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:36.362 [2024-11-06 08:56:59.209875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.362 [2024-11-06 08:56:59.209898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209904] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:20:36.362 [2024-11-06 08:56:59.209908] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:20:36.362 [2024-11-06 08:56:59.209925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180e00
00:20:36.362 [2024-11-06 08:56:59.209954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:20:36.362 [2024-11-06 08:56:59.209967] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:20:36.362 [2024-11-06 08:56:59.209997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180e00
00:20:36.362 [2024-11-06 08:56:59.210010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.362 [2024-11-06 08:56:59.210030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
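
The exchange above is the generic NVMe-over-Fabrics bring-up that spdk_nvme_identify performs before it can report anything: FABRIC CONNECT returns CNTLID 0x0001; the first two property reads fetch VS (cdw0:10300, i.e. NVMe 1.3) and the low dword of CAP (cdw0:1e01007f, whose MQES field 0x7f yields the 128 maximum queue entries and TO field 0x1e the 15000 ms reset timeout reported further down); the controller is then disabled and re-enabled through CC.EN with CSTS.RDY polled at each step, identified, armed with four asynchronous event requests, and put on a keep-alive schedule (GET FEATURES KEEP ALIVE TIMER returns cdw0:2710 = 10000 ms, hence a keep-alive every 5000000 us). The discovery log itself is fetched with the GET LOG PAGE (02) commands for log page 0x70, the first of which appears above and the remaining reads just below. If the same dump were wanted for the data subsystem rather than the discovery controller, only the subnqn in the captured invocation would change (hypothetical command, not part of this run):

  $ build/bin/spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
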
00:20:36.362 [2024-11-06 08:56:59.210045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180e00
00:20:36.363 [2024-11-06 08:56:59.210059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:20:36.363 [2024-11-06 08:56:59.210082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:20:36.363 [2024-11-06 08:56:59.210095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180e00
00:20:36.363 [2024-11-06 08:56:59.210128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:20:36.363 =====================================================
00:20:36.363 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:36.363 =====================================================
00:20:36.363 Controller Capabilities/Features
00:20:36.363 ================================
00:20:36.363 Vendor ID: 0000
00:20:36.363 Subsystem Vendor ID: 0000
00:20:36.363 Serial Number: ....................
00:20:36.363 Model Number: ........................................
00:20:36.363 Firmware Version: 25.01
00:20:36.363 Recommended Arb Burst: 0
00:20:36.363 IEEE OUI Identifier: 00 00 00
00:20:36.363 Multi-path I/O
00:20:36.363 May have multiple subsystem ports: No
00:20:36.363 May have multiple controllers: No
00:20:36.363 Associated with SR-IOV VF: No
00:20:36.363 Max Data Transfer Size: 131072
00:20:36.363 Max Number of Namespaces: 0
00:20:36.363 Max Number of I/O Queues: 1024
00:20:36.363 NVMe Specification Version (VS): 1.3
00:20:36.363 NVMe Specification Version (Identify): 1.3
00:20:36.363 Maximum Queue Entries: 128
00:20:36.363 Contiguous Queues Required: Yes
00:20:36.363 Arbitration Mechanisms Supported
00:20:36.363 Weighted Round Robin: Not Supported
00:20:36.363 Vendor Specific: Not Supported
00:20:36.363 Reset Timeout: 15000 ms
00:20:36.363 Doorbell Stride: 4 bytes
00:20:36.363 NVM Subsystem Reset: Not Supported
00:20:36.363 Command Sets Supported
00:20:36.363 NVM Command Set: Supported
00:20:36.363 Boot Partition: Not Supported
00:20:36.363 Memory Page Size Minimum: 4096 bytes
00:20:36.363 Memory Page Size Maximum: 4096 bytes
00:20:36.363 Persistent Memory Region: Not Supported
00:20:36.363 Optional Asynchronous Events Supported
00:20:36.363 Namespace Attribute Notices: Not Supported
00:20:36.363 Firmware Activation Notices: Not Supported
00:20:36.363 ANA Change Notices: Not Supported
00:20:36.363 PLE Aggregate Log Change Notices: Not Supported
00:20:36.363 LBA Status Info Alert Notices: Not Supported
00:20:36.363 EGE Aggregate Log Change Notices: Not Supported
00:20:36.363 Normal NVM Subsystem Shutdown event: Not Supported
00:20:36.363 Zone Descriptor Change Notices: Not Supported
00:20:36.363 Discovery Log Change Notices: Supported
00:20:36.363 Controller Attributes
00:20:36.363 128-bit Host Identifier: Not Supported
00:20:36.363 Non-Operational Permissive Mode: Not Supported
00:20:36.363 NVM Sets: Not Supported
00:20:36.363 Read Recovery Levels: Not Supported
00:20:36.363 Endurance Groups: Not Supported
00:20:36.363 Predictable Latency Mode: Not Supported
00:20:36.363 Traffic Based Keep Alive: Not Supported
00:20:36.363 Namespace Granularity: Not Supported
00:20:36.363 SQ Associations: Not Supported
00:20:36.363 UUID List: Not Supported
00:20:36.363 Multi-Domain Subsystem: Not Supported
00:20:36.363 Fixed Capacity Management: Not Supported
00:20:36.363 Variable Capacity Management: Not Supported
00:20:36.363 Delete Endurance Group: Not Supported
00:20:36.363 Delete NVM Set: Not Supported
00:20:36.363 Extended LBA Formats Supported: Not Supported
00:20:36.363 Flexible Data Placement Supported: Not Supported
00:20:36.363
00:20:36.363 Controller Memory Buffer Support
00:20:36.363 ================================
00:20:36.363 Supported: No
00:20:36.363
00:20:36.363 Persistent Memory Region Support
00:20:36.363 ================================
00:20:36.363 Supported: No
00:20:36.363
00:20:36.363 Admin Command Set Attributes
00:20:36.363 ============================
00:20:36.363 Security Send/Receive: Not Supported
00:20:36.363 Format NVM: Not Supported
00:20:36.363 Firmware Activate/Download: Not Supported
00:20:36.363 Namespace Management: Not Supported
00:20:36.363 Device Self-Test: Not Supported
00:20:36.363 Directives: Not Supported
00:20:36.363 NVMe-MI: Not Supported
00:20:36.363 Virtualization Management: Not Supported
00:20:36.363 Doorbell Buffer Config: Not Supported
00:20:36.363 Get LBA Status Capability: Not Supported
00:20:36.363 Command & Feature Lockdown Capability: Not Supported
00:20:36.363 Abort Command Limit: 1
00:20:36.363 Async Event Request Limit: 4
00:20:36.363 Number of Firmware Slots: N/A
00:20:36.363 Firmware Slot 1 Read-Only: N/A
00:20:36.363 Firmware Activation Without Reset: N/A
00:20:36.363 Multiple Update Detection Support: N/A
00:20:36.363 Firmware Update Granularity: No Information Provided
00:20:36.363 Per-Namespace SMART Log: No
00:20:36.363 Asymmetric Namespace Access Log Page: Not Supported
00:20:36.363 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:36.363 Command Effects Log Page: Not Supported
00:20:36.363 Get Log Page Extended Data: Supported
00:20:36.363 Telemetry Log Pages: Not Supported
00:20:36.363 Persistent Event Log Pages: Not Supported
00:20:36.363 Supported Log Pages Log Page: May Support
00:20:36.363 Commands Supported & Effects Log Page: Not Supported
00:20:36.363 Feature Identifiers & Effects Log Page: May Support
00:20:36.363 NVMe-MI Commands & Effects Log Page: May Support
00:20:36.363 Data Area 4 for Telemetry Log: Not Supported
00:20:36.363 Error Log Page Entries Supported: 128
00:20:36.363 Keep Alive: Not Supported
00:20:36.363
00:20:36.363 NVM Command Set Attributes
00:20:36.363 ==========================
00:20:36.363 Submission Queue Entry Size
00:20:36.363 Max: 1
00:20:36.363 Min: 1
00:20:36.363 Completion Queue Entry Size
00:20:36.363 Max: 1
00:20:36.363 Min: 1
00:20:36.363 Number of Namespaces: 0
00:20:36.363 Compare Command: Not Supported
00:20:36.363 Write Uncorrectable Command: Not Supported
00:20:36.363 Dataset Management Command: Not Supported
00:20:36.363 Write Zeroes Command: Not Supported
00:20:36.363 Set Features Save Field: Not Supported
00:20:36.363 Reservations: Not Supported
00:20:36.363 Timestamp: Not Supported
00:20:36.363 Copy: Not Supported
00:20:36.363 Volatile Write Cache: Not Present
00:20:36.363 Atomic Write Unit (Normal): 1
00:20:36.363 Atomic Write Unit (PFail): 1
00:20:36.363 Atomic Compare & Write Unit: 1
00:20:36.363 Fused Compare & Write: Supported
00:20:36.363 Scatter-Gather List
00:20:36.363 SGL Command Set: Supported
00:20:36.363 SGL Keyed: Supported
00:20:36.363 SGL Bit Bucket Descriptor: Not Supported
00:20:36.363 SGL Metadata Pointer: Not Supported
00:20:36.363 Oversized SGL: Not Supported
00:20:36.363 SGL Metadata Address: Not Supported
00:20:36.364 SGL Offset: Supported
00:20:36.364 Transport SGL Data Block: Not Supported
00:20:36.364 Replay Protected Memory Block: Not Supported
00:20:36.364
00:20:36.364 Firmware Slot Information
00:20:36.364 =========================
00:20:36.364 Active slot: 0
00:20:36.364
00:20:36.364
00:20:36.364 Error Log
00:20:36.364 =========
00:20:36.364
00:20:36.364 Active Namespaces
00:20:36.364 =================
00:20:36.364 Discovery Log Page
00:20:36.364 ==================
00:20:36.364 Generation Counter: 2
00:20:36.364 Number of Records: 2
00:20:36.364 Record Format: 0
00:20:36.364
00:20:36.364 Discovery Log Entry 0
00:20:36.364 ----------------------
00:20:36.364 Transport Type: 1 (RDMA)
00:20:36.364 Address Family: 1 (IPv4)
00:20:36.364 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:36.364 Entry Flags:
00:20:36.364 Duplicate Returned Information: 1
00:20:36.364 Explicit Persistent Connection Support for Discovery: 1
00:20:36.364 Transport Requirements:
00:20:36.364 Secure Channel: Not Required
00:20:36.364 Port ID: 0 (0x0000)
00:20:36.364 Controller ID: 65535 (0xffff)
00:20:36.364 Admin Max SQ Size: 128
00:20:36.364 Transport Service Identifier: 4420
00:20:36.364 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:36.364 Transport Address: 192.168.100.8
00:20:36.364 Transport Specific Address Subtype - RDMA
00:20:36.364 RDMA QP Service Type: 1 (Reliable Connected)
00:20:36.364 RDMA Provider Type: 1 (No provider specified)
00:20:36.364 RDMA CM Service: 1 (RDMA_CM)
00:20:36.364 Discovery Log Entry 1
00:20:36.364 ----------------------
00:20:36.364 Transport Type: 1 (RDMA)
00:20:36.364 Address Family: 1 (IPv4)
00:20:36.364 Subsystem Type: 2 (NVM Subsystem)
00:20:36.364 Entry Flags:
00:20:36.364 Duplicate Returned Information: 0
00:20:36.364 Explicit Persistent Connection Support for Discovery: 0
00:20:36.364 Transport Requirements:
00:20:36.364 Secure Channel: Not Required
00:20:36.364 Port ID: 0 (0x0000)
00:20:36.364 Controller ID: 65535 (0xffff)
00:20:36.364 Admin Max SQ Size: [2024-11-06 08:56:59.210200] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:20:36.364 [2024-11-06 08:56:59.210214] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 37311 doesn't match qid
00:20:36.364 [2024-11-06 08:56:59.210226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:84dcfed0 sqhd:0c30 p:0 m:0 dnr:0
00:20:36.364 [2024-11-06 08:56:59.210231] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 37311 doesn't match qid
00:20:36.364 [2024-11-06 08:56:59.210237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:84dcfed0 sqhd:0c30 p:0 m:0 dnr:0
00:20:36.364 [2024-11-06 08:56:59.210241] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 37311 doesn't match qid
00:20:36.364 [2024-11-06 08:56:59.210247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:84dcfed0 sqhd:0c30 p:0 m:0 dnr:0
00:20:36.364 [2024-11-06 08:56:59.210251] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 37311 doesn't match qid
00:20:36.364 [2024-11-06 08:56:59.210256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:84dcfed0 sqhd:0c30 p:0 m:0 dnr:0
00:20:36.364 [2024-11-06 08:56:59.210263] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180e00
00:20:36.364 [2024-11-06 08:56:59.210272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.364 [2024-11-06 08:56:59.210289] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:36.364 [2024-11-06 08:56:59.210294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:20:36.364 [2024-11-06 08:56:59.210300] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00
00:20:36.364 [2024-11-06 08:56:59.210306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.364 [2024-11-06 08:56:59.210310] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x180e00
00:20:36.364 [2024-11-06 08:56:59.210333] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:36.364 [2024-11-06 08:56:59.210338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
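
Together the two records above describe the discovery subsystem itself (entry 0) and the NVM subsystem nqn.2016-06.io.spdk:cnode1 (entry 1), both served over RDMA on 192.168.100.8:4420. The identify run then begins destructing the controller, and the four ABORTED - SQ DELETION completions are consistent with the four outstanding ASYNC EVENT REQUESTs being flushed as the admin queue is torn down. For comparison, a Linux host with nvme-cli and the nvme-rdma module loaded should see the same two records with (hypothetical host-side command, not part of this run):

  $ nvme discover -t rdma -a 192.168.100.8 -s 4420
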
00:20:36.364 [2024-11-06 08:56:59.210343] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:20:36.364 [2024-11-06 08:56:59.210347] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:20:36.364 [2024-11-06 08:56:59.210351] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x180e00
00:20:36.364 [2024-11-06 08:56:59.210358] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00
00:20:36.364 [2024-11-06 08:56:59.210364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.364 [2024-11-06 08:56:59.210380] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:36.364 [2024-11-06 08:56:59.210384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
[... the same FABRIC PROPERTY GET qid:0 cid:3 / SUCCESS cdw0:1 exchange repeats 40 more times (sqhd:0013 through sqhd:001f, wrapping to sqhd:0000 through sqhd:001a), each bracketed by the same nvme_rdma submit/recv/request_ready *DEBUG* lines rotating through the response slots 0x2000003cf858 onward; the capture ends mid-exchange ...]
0x0 len:0x0 key:0x0 00:20:36.366 [2024-11-06 08:56:59.211997] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.366 [2024-11-06 08:56:59.212001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:36.366 [2024-11-06 08:56:59.212005] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x180e00 00:20:36.366 [2024-11-06 08:56:59.212012] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.366 [2024-11-06 08:56:59.212018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212037] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212046] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212053] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212081] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212089] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212096] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212123] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212131] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212140] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212165] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212173] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212180] 
nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212211] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212220] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212227] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212253] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212262] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212269] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212290] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212299] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212306] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212327] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212336] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212343] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212370] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212379] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212387] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212412] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212421] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212427] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212452] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212461] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212468] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212491] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212499] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212506] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212532] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212541] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212548] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212568] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212577] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212583] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212610] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212620] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212627] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212651] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212660] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212667] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212692] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212700] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212707] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.367 [2024-11-06 08:56:59.212730] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.367 [2024-11-06 08:56:59.212734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:36.367 [2024-11-06 08:56:59.212739] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212746] 
nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.367 [2024-11-06 08:56:59.212752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.212774] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.212778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.212782] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212789] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.212817] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.212821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.212826] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212832] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.212859] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.212863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.212869] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212876] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.212897] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.212902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.212906] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212913] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.212937] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.212942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.212946] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212953] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.212975] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.212979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.212984] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212990] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.212996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.213012] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.213016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.213020] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213027] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.213048] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.213053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.213057] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213064] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.213089] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.213094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.213099] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213105] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.213135] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.213139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.213144] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213151] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.213175] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.213179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.213184] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213191] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.213197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.217210] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.217217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.217221] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.217228] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.217234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.368 [2024-11-06 08:56:59.217255] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.368 [2024-11-06 08:56:59.217259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001a p:0 m:0 dnr:0 00:20:36.368 [2024-11-06 08:56:59.217263] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x180e00 00:20:36.368 [2024-11-06 08:56:59.217268] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:20:36.368 128 00:20:36.368 Transport Service Identifier: 4420 00:20:36.368 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:36.368 Transport Address: 192.168.100.8 00:20:36.368 Transport Specific Address Subtype - RDMA 00:20:36.368 RDMA QP Service Type: 1 (Reliable Connected) 00:20:36.368 RDMA Provider Type: 1 (No provider specified) 00:20:36.368 RDMA CM Service: 1 (RDMA_CM) 00:20:36.368 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L 
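The discovery controller above has shut down cleanly after printing its single discovery log entry (RDMA/RC on 192.168.100.8:4420, subsystem nqn.2016-06.io.spdk:cnode1). The next test step points spdk_nvme_identify directly at that subsystem with all debug log flags enabled. A minimal standalone form of the invocation, reusing only the -r transport-ID string and -L option that appear in the harness line below (run from the SPDK build tree; target assumed still listening):

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all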
08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:20:36.369 [2024-11-06 08:56:59.287239] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:20:36.369 [2024-11-06 08:56:59.287273] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488418 ]
00:20:36.369 [2024-11-06 08:56:59.342271] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:20:36.369 [2024-11-06 08:56:59.342333] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:20:36.369 [2024-11-06 08:56:59.342347] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:20:36.369 [2024-11-06 08:56:59.342350] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:20:36.369 [2024-11-06 08:56:59.342372] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:20:36.369 [2024-11-06 08:56:59.353660] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:20:36.369 [2024-11-06 08:56:59.363912] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0
00:20:36.369 [2024-11-06 08:56:59.363921] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
[... 08:56:59.363927 - 08:56:59.364061: 31 near-identical nvme_rdma.c: 889:nvme_rdma_create_rsps *DEBUG* lines omitted (local addr 0x2000003cf600 through 0x2000003cfab0, length 0x10, lkey 0x180e00) ...]
00:20:36.369 [2024-11-06 08:56:59.364065] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:20:36.369 [2024-11-06 08:56:59.364069] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0
00:20:36.369 [2024-11-06 08:56:59.364072] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:20:36.369 [2024-11-06 08:56:59.364094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x180e00
00:20:36.634 [2024-11-06 08:56:59.369215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369229] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:36.634 [2024-11-06 08:56:59.369235] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:20:36.634 [2024-11-06 08:56:59.369240] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:20:36.634 [2024-11-06 08:56:59.369256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369281] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:20:36.634 [2024-11-06 08:56:59.369290] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:20:36.634 [2024-11-06 08:56:59.369302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369330] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:20:36.634 [2024-11-06 08:56:59.369339] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:20:36.634 [2024-11-06 08:56:59.369351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369378] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:36.634 [2024-11-06 08:56:59.369395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369419] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:20:36.634 [2024-11-06 08:56:59.369423] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:20:36.634 [2024-11-06 08:56:59.369432] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:36.634 [2024-11-06 08:56:59.369537] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:20:36.634 [2024-11-06 08:56:59.369541] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:36.634 [2024-11-06 08:56:59.369553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369584] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:36.634 [2024-11-06 08:56:59.369601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369627] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:36.634 [2024-11-06 08:56:59.369631] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:20:36.634 [2024-11-06 08:56:59.369640] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:20:36.634 [2024-11-06 08:56:59.369646] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:20:36.634 [2024-11-06 08:56:59.369659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180e00
00:20:36.634 [2024-11-06 08:56:59.369705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369711] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:20:36.634 [2024-11-06 08:56:59.369715] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:20:36.634 [2024-11-06 08:56:59.369719] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:20:36.634 [2024-11-06 08:56:59.369723] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:20:36.634 [2024-11-06 08:56:59.369726] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:20:36.634 [2024-11-06 08:56:59.369731] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:20:36.634 [2024-11-06 08:56:59.369742] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:20:36.634 [2024-11-06 08:56:59.369755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:20:36.634 [2024-11-06 08:56:59.369791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.634 [2024-11-06 08:56:59.369803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.634 [2024-11-06 08:56:59.369813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.634 [2024-11-06 08:56:59.369823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.634 [2024-11-06 08:56:59.369828] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:20:36.634 [2024-11-06 08:56:59.369839] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:36.634 [2024-11-06 08:56:59.369850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.634 [2024-11-06 08:56:59.369873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:20:36.635 [2024-11-06 08:56:59.369878] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:20:36.635 [2024-11-06 08:56:59.369882] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.369891] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.369898] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.369909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.635 [2024-11-06 08:56:59.369936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0
00:20:36.635 [2024-11-06 08:56:59.369985] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.369995] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180e00
00:20:36.635 [2024-11-06 08:56:59.370038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:20:36.635 [2024-11-06 08:56:59.370049] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:20:36.635 [2024-11-06 08:56:59.370061] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370071] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180e00
00:20:36.635 [2024-11-06 08:56:59.370113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:20:36.635 [2024-11-06 08:56:59.370122] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370132] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180e00
00:20:36.635 [2024-11-06 08:56:59.370175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:20:36.635 [2024-11-06 08:56:59.370183] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370192] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370199] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370208] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370213] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370217] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370222] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:20:36.635 [2024-11-06 08:56:59.370226] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:20:36.635 [2024-11-06 08:56:59.370232] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:20:36.635 [2024-11-06 08:56:59.370249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.635 [2024-11-06 08:56:59.370260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.635 [2024-11-06 08:56:59.370274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:20:36.635 [2024-11-06 08:56:59.370291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:36.635 [2024-11-06 08:56:59.370305] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370311] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:36.635 [2024-11-06 08:56:59.370320] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370326] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.635 [2024-11-06 08:56:59.370352] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:36.635 [2024-11-06 08:56:59.370360] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370367] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.635 [2024-11-06 08:56:59.370390] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:20:36.635 [2024-11-06 08:56:59.370399] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370409] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180e00 00:20:36.635 [2024-11-06 08:56:59.370422] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180e00 00:20:36.635 [2024-11-06 08:56:59.370435] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0ac0 length 0x40 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 
key:0x180e00 00:20:36.635 [2024-11-06 08:56:59.370447] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180e00 00:20:36.635 [2024-11-06 08:56:59.370459] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:36.635 [2024-11-06 08:56:59.370471] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370481] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:36.635 [2024-11-06 08:56:59.370492] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370496] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:36.635 [2024-11-06 08:56:59.370506] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x180e00 00:20:36.635 [2024-11-06 08:56:59.370518] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.635 [2024-11-06 08:56:59.370522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:36.636 [2024-11-06 08:56:59.370529] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x180e00 00:20:36.636 ===================================================== 00:20:36.636 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.636 ===================================================== 00:20:36.636 Controller Capabilities/Features 00:20:36.636 ================================ 00:20:36.636 Vendor ID: 8086 00:20:36.636 Subsystem Vendor ID: 8086 00:20:36.636 Serial Number: SPDK00000000000001 00:20:36.636 Model Number: SPDK bdev Controller 00:20:36.636 Firmware Version: 25.01 00:20:36.636 Recommended Arb Burst: 6 00:20:36.636 IEEE OUI Identifier: e4 d2 5c 00:20:36.636 Multi-path I/O 00:20:36.636 May have multiple subsystem ports: Yes 00:20:36.636 May have multiple controllers: Yes 00:20:36.636 Associated with SR-IOV VF: No 00:20:36.636 Max Data Transfer Size: 131072 00:20:36.636 Max Number of Namespaces: 32 00:20:36.636 Max Number of I/O Queues: 127 00:20:36.636 NVMe Specification Version (VS): 1.3 00:20:36.636 NVMe Specification Version (Identify): 1.3 00:20:36.636 Maximum Queue Entries: 128 00:20:36.636 Contiguous Queues Required: Yes 00:20:36.636 Arbitration Mechanisms Supported 00:20:36.636 Weighted Round Robin: Not Supported 00:20:36.636 Vendor Specific: Not Supported 00:20:36.636 Reset Timeout: 15000 ms 00:20:36.636 Doorbell Stride: 4 bytes 00:20:36.636 NVM Subsystem Reset: Not Supported 00:20:36.636 Command Sets Supported 00:20:36.636 NVM Command Set: Supported 00:20:36.636 Boot 
Partition: Not Supported 00:20:36.636 Memory Page Size Minimum: 4096 bytes 00:20:36.636 Memory Page Size Maximum: 4096 bytes 00:20:36.636 Persistent Memory Region: Not Supported 00:20:36.636 Optional Asynchronous Events Supported 00:20:36.636 Namespace Attribute Notices: Supported 00:20:36.636 Firmware Activation Notices: Not Supported 00:20:36.636 ANA Change Notices: Not Supported 00:20:36.636 PLE Aggregate Log Change Notices: Not Supported 00:20:36.636 LBA Status Info Alert Notices: Not Supported 00:20:36.636 EGE Aggregate Log Change Notices: Not Supported 00:20:36.636 Normal NVM Subsystem Shutdown event: Not Supported 00:20:36.636 Zone Descriptor Change Notices: Not Supported 00:20:36.636 Discovery Log Change Notices: Not Supported 00:20:36.636 Controller Attributes 00:20:36.636 128-bit Host Identifier: Supported 00:20:36.636 Non-Operational Permissive Mode: Not Supported 00:20:36.636 NVM Sets: Not Supported 00:20:36.636 Read Recovery Levels: Not Supported 00:20:36.636 Endurance Groups: Not Supported 00:20:36.636 Predictable Latency Mode: Not Supported 00:20:36.636 Traffic Based Keep Alive: Not Supported 00:20:36.636 Namespace Granularity: Not Supported 00:20:36.636 SQ Associations: Not Supported 00:20:36.636 UUID List: Not Supported 00:20:36.636 Multi-Domain Subsystem: Not Supported 00:20:36.636 Fixed Capacity Management: Not Supported 00:20:36.636 Variable Capacity Management: Not Supported 00:20:36.636 Delete Endurance Group: Not Supported 00:20:36.636 Delete NVM Set: Not Supported 00:20:36.636 Extended LBA Formats Supported: Not Supported 00:20:36.636 Flexible Data Placement Supported: Not Supported 00:20:36.636 00:20:36.636 Controller Memory Buffer Support 00:20:36.636 ================================ 00:20:36.636 Supported: No 00:20:36.636 00:20:36.636 Persistent Memory Region Support 00:20:36.636 ================================ 00:20:36.636 Supported: No 00:20:36.636 00:20:36.636 Admin Command Set Attributes 00:20:36.636 ============================ 00:20:36.636 Security Send/Receive: Not Supported 00:20:36.636 Format NVM: Not Supported 00:20:36.636 Firmware Activate/Download: Not Supported 00:20:36.636 Namespace Management: Not Supported 00:20:36.636 Device Self-Test: Not Supported 00:20:36.636 Directives: Not Supported 00:20:36.636 NVMe-MI: Not Supported 00:20:36.636 Virtualization Management: Not Supported 00:20:36.636 Doorbell Buffer Config: Not Supported 00:20:36.636 Get LBA Status Capability: Not Supported 00:20:36.636 Command & Feature Lockdown Capability: Not Supported 00:20:36.636 Abort Command Limit: 4 00:20:36.636 Async Event Request Limit: 4 00:20:36.636 Number of Firmware Slots: N/A 00:20:36.636 Firmware Slot 1 Read-Only: N/A 00:20:36.636 Firmware Activation Without Reset: N/A 00:20:36.636 Multiple Update Detection Support: N/A 00:20:36.636 Firmware Update Granularity: No Information Provided 00:20:36.636 Per-Namespace SMART Log: No 00:20:36.636 Asymmetric Namespace Access Log Page: Not Supported 00:20:36.636 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:36.636 Command Effects Log Page: Supported 00:20:36.636 Get Log Page Extended Data: Supported 00:20:36.636 Telemetry Log Pages: Not Supported 00:20:36.636 Persistent Event Log Pages: Not Supported 00:20:36.636 Supported Log Pages Log Page: May Support 00:20:36.636 Commands Supported & Effects Log Page: Not Supported 00:20:36.636 Feature Identifiers & Effects Log Page: May Support 00:20:36.636 NVMe-MI Commands & Effects Log Page: May Support 00:20:36.636 Data Area 4 for Telemetry Log: Not Supported 00:20:36.636 
Error Log Page Entries Supported: 128 00:20:36.636 Keep Alive: Supported 00:20:36.636 Keep Alive Granularity: 10000 ms 00:20:36.636 00:20:36.636 NVM Command Set Attributes 00:20:36.636 ========================== 00:20:36.636 Submission Queue Entry Size 00:20:36.636 Max: 64 00:20:36.636 Min: 64 00:20:36.636 Completion Queue Entry Size 00:20:36.636 Max: 16 00:20:36.636 Min: 16 00:20:36.636 Number of Namespaces: 32 00:20:36.636 Compare Command: Supported 00:20:36.636 Write Uncorrectable Command: Not Supported 00:20:36.636 Dataset Management Command: Supported 00:20:36.636 Write Zeroes Command: Supported 00:20:36.636 Set Features Save Field: Not Supported 00:20:36.636 Reservations: Supported 00:20:36.636 Timestamp: Not Supported 00:20:36.636 Copy: Supported 00:20:36.636 Volatile Write Cache: Present 00:20:36.636 Atomic Write Unit (Normal): 1 00:20:36.636 Atomic Write Unit (PFail): 1 00:20:36.636 Atomic Compare & Write Unit: 1 00:20:36.636 Fused Compare & Write: Supported 00:20:36.636 Scatter-Gather List 00:20:36.636 SGL Command Set: Supported 00:20:36.636 SGL Keyed: Supported 00:20:36.636 SGL Bit Bucket Descriptor: Not Supported 00:20:36.636 SGL Metadata Pointer: Not Supported 00:20:36.636 Oversized SGL: Not Supported 00:20:36.636 SGL Metadata Address: Not Supported 00:20:36.636 SGL Offset: Supported 00:20:36.636 Transport SGL Data Block: Not Supported 00:20:36.636 Replay Protected Memory Block: Not Supported 00:20:36.636 00:20:36.636 Firmware Slot Information 00:20:36.636 ========================= 00:20:36.636 Active slot: 1 00:20:36.636 Slot 1 Firmware Revision: 25.01 00:20:36.636 00:20:36.636 00:20:36.636 Commands Supported and Effects 00:20:36.636 ============================== 00:20:36.636 Admin Commands 00:20:36.636 -------------- 00:20:36.636 Get Log Page (02h): Supported 00:20:36.636 Identify (06h): Supported 00:20:36.636 Abort (08h): Supported 00:20:36.636 Set Features (09h): Supported 00:20:36.636 Get Features (0Ah): Supported 00:20:36.636 Asynchronous Event Request (0Ch): Supported 00:20:36.636 Keep Alive (18h): Supported 00:20:36.636 I/O Commands 00:20:36.636 ------------ 00:20:36.636 Flush (00h): Supported LBA-Change 00:20:36.636 Write (01h): Supported LBA-Change 00:20:36.636 Read (02h): Supported 00:20:36.636 Compare (05h): Supported 00:20:36.636 Write Zeroes (08h): Supported LBA-Change 00:20:36.636 Dataset Management (09h): Supported LBA-Change 00:20:36.636 Copy (19h): Supported LBA-Change 00:20:36.636 00:20:36.636 Error Log 00:20:36.636 ========= 00:20:36.636 00:20:36.636 Arbitration 00:20:36.636 =========== 00:20:36.636 Arbitration Burst: 1 00:20:36.636 00:20:36.636 Power Management 00:20:36.636 ================ 00:20:36.636 Number of Power States: 1 00:20:36.636 Current Power State: Power State #0 00:20:36.636 Power State #0: 00:20:36.636 Max Power: 0.00 W 00:20:36.636 Non-Operational State: Operational 00:20:36.636 Entry Latency: Not Reported 00:20:36.636 Exit Latency: Not Reported 00:20:36.636 Relative Read Throughput: 0 00:20:36.636 Relative Read Latency: 0 00:20:36.636 Relative Write Throughput: 0 00:20:36.636 Relative Write Latency: 0 00:20:36.636 Idle Power: Not Reported 00:20:36.636 Active Power: Not Reported 00:20:36.636 Non-Operational Permissive Mode: Not Supported 00:20:36.636 00:20:36.636 Health Information 00:20:36.636 ================== 00:20:36.636 Critical Warnings: 00:20:36.636 Available Spare Space: OK 00:20:36.636 Temperature: OK 00:20:36.636 Device Reliability: OK 00:20:36.636 Read Only: No 00:20:36.636 Volatile Memory Backup: OK 00:20:36.636 Current 
Temperature: 0 Kelvin (-273 Celsius) 00:20:36.636 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:36.636 Available Spare: 0% 00:20:36.636 Available Spare Threshold: 0% 00:20:36.637 Life Percentage [2024-11-06 08:56:59.370605] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370628] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.370632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370636] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370660] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:36.637 [2024-11-06 08:56:59.370667] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29489 doesn't match qid 00:20:36.637 [2024-11-06 08:56:59.370679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32645 cdw0:5c1c02a0 sqhd:ec30 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370683] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29489 doesn't match qid 00:20:36.637 [2024-11-06 08:56:59.370689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32645 cdw0:5c1c02a0 sqhd:ec30 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370693] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29489 doesn't match qid 00:20:36.637 [2024-11-06 08:56:59.370700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32645 cdw0:5c1c02a0 sqhd:ec30 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370704] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29489 doesn't match qid 00:20:36.637 [2024-11-06 08:56:59.370710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32645 cdw0:5c1c02a0 sqhd:ec30 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370717] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370742] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.370746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370752] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370762] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370779] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ 
recv completion 00:20:36.637 [2024-11-06 08:56:59.370783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370788] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:36.637 [2024-11-06 08:56:59.370792] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:36.637 [2024-11-06 08:56:59.370796] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370803] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370829] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.370833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370838] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370845] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370867] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.370871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370876] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370883] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370907] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.370911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370917] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370924] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370950] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.370954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 
08:56:59.370958] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370965] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.370971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.370990] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.370995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.370999] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371006] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371029] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371038] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371045] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371068] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371076] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371083] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371107] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371116] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371123] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371149] 
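
The shutdown path logged above reports "RTD3E = 0 us" followed by "shutdown timeout = 10000 ms": the driver converts the Identify controller data's RTD3 Entry Latency from microseconds to milliseconds and applies a floor. A hedged sketch of that derivation follows; the helper and the exact clamping constant are assumptions, but a 10-second floor is consistent with the logged pair of values:

#include <stdint.h>

/* Hypothetical helper mirroring the logged derivation
 * "RTD3E = 0 us" -> "shutdown timeout = 10000 ms". */
static uint32_t
shutdown_timeout_ms(uint32_t rtd3e_us)
{
    uint32_t ms = rtd3e_us / 1000;

    if (ms < 10000) {
        ms = 10000; /* assumed 10 s floor, matching the log */
    }
    return ms;
}
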
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371158] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371165] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371188] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371196] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371208] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371231] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371239] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371246] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371275] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371283] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371290] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.637 [2024-11-06 08:56:59.371295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.637 [2024-11-06 08:56:59.371312] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.637 [2024-11-06 08:56:59.371316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:36.637 [2024-11-06 08:56:59.371321] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371327] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371348] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371357] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371363] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371384] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371394] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371401] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371422] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371430] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371437] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371462] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371471] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371477] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371500] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 
08:56:59.371508] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371515] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371540] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371549] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371555] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371579] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371588] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371594] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371618] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371628] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371635] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371657] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371665] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371672] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371698] 
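
The long run of nearly identical "FABRIC PROPERTY GET qid:0 cid:3" entries here is the shutdown poll. On a fabrics controller there is no MMIO register window, so every CSTS read becomes a Property Get admin command; each poll iteration produces one submission, one CQ recv completion, and one completion print, which is why the same triplet repeats until CSTS.SHST reports shutdown complete. A sketch of the equivalent check through SPDK's public API (assuming a ctrlr handle as in the earlier sketch; each call on a fabrics controller issues the PROPERTY GET seen above):

#include <stdbool.h>
#include "spdk/nvme.h"

/* One poll step of the shutdown loop logged above. */
static bool
shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

    return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}
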
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371706] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371713] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371735] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371743] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371750] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371774] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371782] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371789] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371814] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371823] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371829] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371850] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371858] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371865] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371889] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:36.638 [2024-11-06 08:56:59.371897] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371904] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.638 [2024-11-06 08:56:59.371909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.638 [2024-11-06 08:56:59.371926] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.638 [2024-11-06 08:56:59.371930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.371934] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.371941] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.371947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.371963] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.371968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.371972] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.371978] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.371984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372004] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372012] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372019] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372043] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 
08:56:59.372052] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372058] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372079] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372087] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372094] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372115] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372123] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372130] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372154] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372163] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372169] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372190] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372198] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372210] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372231] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372239] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372245] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372271] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372279] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372286] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372311] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372320] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372326] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372352] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372360] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372367] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372396] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372404] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372411] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372435] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372443] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372450] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372474] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372482] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372489] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372514] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372523] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372529] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372552] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372560] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372567] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372591] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 
08:56:59.372599] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372606] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.639 [2024-11-06 08:56:59.372612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.639 [2024-11-06 08:56:59.372632] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.639 [2024-11-06 08:56:59.372636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:36.639 [2024-11-06 08:56:59.372640] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372647] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372667] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372676] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372682] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372710] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372718] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372725] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372751] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372759] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372767] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372788] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372796] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372802] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372824] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372832] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372839] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372866] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372874] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372881] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372905] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372913] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372920] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372945] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372954] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372960] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.372966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.372988] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.372992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.372996] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373004] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.373025] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.373029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.373033] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373040] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.373065] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.373069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.373074] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373080] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.373101] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.373105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 08:56:59.373110] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373116] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00 00:20:36.640 [2024-11-06 08:56:59.373122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:36.640 [2024-11-06 08:56:59.373137] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:36.640 [2024-11-06 08:56:59.373141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:36.640 [2024-11-06 
08:56:59.373146] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x180e00
00:20:36.640 [2024-11-06 08:56:59.373152] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00
00:20:36.640 [2024-11-06 08:56:59.373158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.640 [2024-11-06 08:56:59.373175] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:36.640 [2024-11-06 08:56:59.373179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:20:36.640 [2024-11-06 08:56:59.373183] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x180e00
00:20:36.640 [2024-11-06 08:56:59.373190] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00
00:20:36.640 [2024-11-06 08:56:59.373196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.640 [2024-11-06 08:56:59.377205] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:36.640 [2024-11-06 08:56:59.377211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:20:36.640 [2024-11-06 08:56:59.377219] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x180e00
00:20:36.640 [2024-11-06 08:56:59.377226] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180e00
00:20:36.640 [2024-11-06 08:56:59.377232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:36.640 [2024-11-06 08:56:59.377253] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:36.640 [2024-11-06 08:56:59.377257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0019 p:0 m:0 dnr:0
00:20:36.640 [2024-11-06 08:56:59.377261] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x180e00
00:20:36.640 [2024-11-06 08:56:59.377266] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:20:36.640 Used: 0%
00:20:36.640 Data Units Read: 0
00:20:36.640 Data Units Written: 0
00:20:36.640 Host Read Commands: 0
00:20:36.640 Host Write Commands: 0
00:20:36.640 Controller Busy Time: 0 minutes
00:20:36.640 Power Cycles: 0
00:20:36.640 Power On Hours: 0 hours
00:20:36.640 Unsafe Shutdowns: 0
00:20:36.640 Unrecoverable Media Errors: 0
00:20:36.640 Lifetime Error Log Entries: 0
00:20:36.640 Warning Temperature Time: 0 minutes
00:20:36.640 Critical Temperature Time: 0 minutes
00:20:36.640
00:20:36.640 Number of Queues
00:20:36.640 ================
00:20:36.640 Number of I/O Submission Queues: 127
00:20:36.640 Number of I/O Completion Queues: 127
00:20:36.640
00:20:36.640 Active Namespaces
00:20:36.640 =================
00:20:36.640 Namespace ID:1
00:20:36.640 Error Recovery Timeout: Unlimited
00:20:36.640 Command Set Identifier: NVM (00h)
00:20:36.640 Deallocate: Supported
00:20:36.640 Deallocated/Unwritten Error: Not Supported
00:20:36.641 Deallocated Read Value: Unknown
00:20:36.641 Deallocate in Write Zeroes: Not Supported
00:20:36.641 Deallocated Guard Field: 0xFFFF
00:20:36.641 Flush: Supported
00:20:36.641 Reservation: Supported
00:20:36.641 Namespace Sharing Capabilities: Multiple Controllers
00:20:36.641 Size (in LBAs): 131072 (0GiB)
00:20:36.641 Capacity (in LBAs): 131072 (0GiB)
00:20:36.641 Utilization (in LBAs): 131072 (0GiB)
00:20:36.641 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:36.641 EUI64: ABCDEF0123456789
00:20:36.641 UUID: d0ff3284-7ef5-431e-b160-5fff3de4be16
00:20:36.641 Thin Provisioning: Not Supported
00:20:36.641 Per-NS Atomic Units: Yes
00:20:36.641 Atomic Boundary Size (Normal): 0
00:20:36.641 Atomic Boundary Size (PFail): 0
00:20:36.641 Atomic Boundary Offset: 0
00:20:36.641 Maximum Single Source Range Length: 65535
00:20:36.641 Maximum Copy Length: 65535
00:20:36.641 Maximum Source Range Count: 1
00:20:36.641 NGUID/EUI64 Never Reused: No
00:20:36.641 Namespace Write Protected: No
00:20:36.641 Number of LBA Formats: 1
00:20:36.641 Current LBA Format: LBA Format #00
00:20:36.641 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:36.641
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 488179 ']'
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 488179
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 488179 ']'
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 488179
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux
']' 00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 488179 00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 488179' 00:20:36.641 killing process with pid 488179 00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 488179 00:20:36.641 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 488179 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:20:36.900 00:20:36.900 real 0m7.395s 00:20:36.900 user 0m6.020s 00:20:36.900 sys 0m4.836s 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.900 ************************************ 00:20:36.900 END TEST nvmf_identify 00:20:36.900 ************************************ 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.900 ************************************ 00:20:36.900 START TEST nvmf_perf 00:20:36.900 ************************************ 00:20:36.900 08:56:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:37.160 * Looking for test storage... 
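The identify teardown traced above reduces to a short sequence: delete the subsystem over RPC, then unload the fabrics kernel modules. A minimal sketch of that pattern, assuming the SPDK tree at the workspace path used throughout this run:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma      # prints the "rmmod nvme_rdma" line seen above
  modprobe -v -r nvme-fabrics   # prints "rmmod nvme_fabrics"

As a side note, the namespace reported by the identify test is internally consistent: 131072 LBAs x 512 bytes per LBA = 64 MiB, which truncates to the "0GiB" shown in the Size/Capacity/Utilization lines.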
00:20:37.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:37.160 08:56:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:20:37.160 08:56:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lcov --version 00:20:37.160 08:56:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:20:37.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.160 --rc genhtml_branch_coverage=1 00:20:37.160 --rc genhtml_function_coverage=1 00:20:37.160 --rc genhtml_legend=1 00:20:37.160 --rc geninfo_all_blocks=1 00:20:37.160 --rc geninfo_unexecuted_blocks=1 00:20:37.160 00:20:37.160 ' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:20:37.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.160 --rc genhtml_branch_coverage=1 00:20:37.160 --rc genhtml_function_coverage=1 00:20:37.160 --rc genhtml_legend=1 00:20:37.160 --rc geninfo_all_blocks=1 00:20:37.160 --rc geninfo_unexecuted_blocks=1 00:20:37.160 00:20:37.160 ' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:20:37.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.160 --rc genhtml_branch_coverage=1 00:20:37.160 --rc genhtml_function_coverage=1 00:20:37.160 --rc genhtml_legend=1 00:20:37.160 --rc geninfo_all_blocks=1 00:20:37.160 --rc geninfo_unexecuted_blocks=1 00:20:37.160 00:20:37.160 ' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:20:37.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.160 --rc genhtml_branch_coverage=1 00:20:37.160 --rc genhtml_function_coverage=1 00:20:37.160 --rc genhtml_legend=1 00:20:37.160 --rc geninfo_all_blocks=1 00:20:37.160 --rc geninfo_unexecuted_blocks=1 00:20:37.160 00:20:37.160 ' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.160 08:57:00 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:37.160 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.161 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.161 08:57:00 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.161 08:57:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.732 08:57:05 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:43.732 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:43.732 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:43.732 Found net devices under 0000:da:00.0: mlx_0_0 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
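The discovery above keys off PCI vendor:device IDs (0x15b3:0x1015 is a Mellanox ConnectX-4 Lx function) and then resolves each function to its net device through sysfs, the same /sys layout the pci_net_devs lookup reads. A small illustrative sketch of that mapping (the two PCI addresses are the ones found on this host):

  # Print the net interface(s) backing each RDMA-capable PCI function.
  for pci in 0000:da:00.0 0000:da:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] && echo "$pci -> ${netdir##*/}"
      done
  done

On this host that prints mlx_0_0 and mlx_0_1, the renamed mlx5 ports used for the 192.168.100.0/24 test network below.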
00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:43.732 Found net devices under 0000:da:00.1: mlx_0_1 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # rdma_device_init 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:43.732 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:43.733 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:43.733 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:20:43.733 altname enp218s0f0np0 00:20:43.733 altname ens818f0np0 00:20:43.733 inet 192.168.100.8/24 scope global mlx_0_0 00:20:43.733 valid_lft forever preferred_lft forever 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:43.733 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:43.733 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:20:43.733 altname enp218s0f1np1 00:20:43.733 altname ens818f1np1 00:20:43.733 inet 192.168.100.9/24 scope global mlx_0_1 00:20:43.733 valid_lft forever preferred_lft forever 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 
-- # '[' '' == iso ']' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 
-- # RDMA_IP_LIST='192.168.100.8 00:20:43.733 192.168.100.9' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:20:43.733 192.168.100.9' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # head -n 1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:20:43.733 192.168.100.9' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # tail -n +2 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # head -n 1 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=491622 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 491622 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 491622 ']' 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.733 08:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.733 [2024-11-06 08:57:05.993991] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:20:43.733 [2024-11-06 08:57:05.994038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.733 [2024-11-06 08:57:06.069501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.733 [2024-11-06 08:57:06.112824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.733 [2024-11-06 08:57:06.112862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.733 [2024-11-06 08:57:06.112869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.733 [2024-11-06 08:57:06.112876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.733 [2024-11-06 08:57:06.112882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.733 [2024-11-06 08:57:06.114443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.733 [2024-11-06 08:57:06.114555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.733 [2024-11-06 08:57:06.114686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.733 [2024-11-06 08:57:06.114687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:43.733 08:57:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:20:47.024 08:57:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:20:47.024 [2024-11-06 08:57:09.921591] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:20:47.024 [2024-11-06 08:57:09.941395] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe1b7c0/0xd70e80) succeed. 00:20:47.024 [2024-11-06 08:57:09.950612] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcdb2b0/0xcf0e20) succeed. 00:20:47.283 08:57:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.283 08:57:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:47.283 08:57:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.542 08:57:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:47.542 08:57:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:47.843 08:57:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:48.103 [2024-11-06 08:57:10.872468] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:48.103 08:57:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:48.103 08:57:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:20:48.103 08:57:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:20:48.103 08:57:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:48.103 08:57:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:20:49.483 Initializing NVMe Controllers 00:20:49.483 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:20:49.483 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:20:49.483 Initialization complete. Launching workers. 
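Condensed, the RPC sequence that stood the target up above is short; a sketch assuming nvmf_tgt is already listening on /var/tmp/spdk.sock and SPDK points at the workspace tree:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc=$SPDK/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  $rpc bdev_malloc_create 64 512                                   # returns Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The -c 0 on the transport is what triggers the warning logged above: the in-capsule data size is raised to 256 bytes, the minimum required to support msdbd=16.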
00:20:49.483 ========================================================
00:20:49.483 Latency(us)
00:20:49.483 Device Information : IOPS MiB/s Average min max
00:20:49.483 PCIE (0000:5e:00.0) NSID 1 from core 0: 98319.59 384.06 324.96 29.69 9460.33
00:20:49.483 ========================================================
00:20:49.483 Total : 98319.59 384.06 324.96 29.69 9460.33
00:20:49.483
00:20:49.483 08:57:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:52.772 Initializing NVMe Controllers
00:20:52.772 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:52.772 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:52.772 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:52.772 Initialization complete. Launching workers.
00:20:52.773 ========================================================
00:20:52.773 Latency(us)
00:20:52.773 Device Information : IOPS MiB/s Average min max
00:20:52.773 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6550.71 25.59 151.69 47.92 8071.43
00:20:52.773 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5108.84 19.96 195.35 66.98 8046.00
00:20:52.773 ========================================================
00:20:52.773 Total : 11659.55 45.55 170.82 47.92 8071.43
00:20:52.773
00:20:52.773 08:57:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:56.061 Initializing NVMe Controllers
00:20:56.061 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:56.061 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:56.061 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:56.061 Initialization complete. Launching workers.
00:20:56.061 ========================================================
00:20:56.061 Latency(us)
00:20:56.061 Device Information : IOPS MiB/s Average min max
00:20:56.061 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18106.98 70.73 1766.97 493.09 8105.36
00:20:56.061 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3968.00 15.50 8112.30 7735.44 14466.09
00:20:56.061 ========================================================
00:20:56.061 Total : 22074.98 86.23 2907.54 493.09 14466.09
00:20:56.061
00:20:56.321 08:57:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
08:57:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:21:00.523 Initializing NVMe Controllers
00:21:00.523 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:00.523 Controller IO queue size 128, less than required.
00:21:00.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
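The sweeps above vary a handful of spdk_nvme_perf knobs: -q sets the per-namespace queue depth, -o the IO size in bytes, -w the access pattern, -M the read percentage of the mix, -t the run time in seconds, and -r the target to attach to. A sketch repeating the first fabrics run against the listener created earlier (other flags seen in the log, such as -HI and -O, are left out here):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

At -q 1 the reported average latency (~152 us for NSID 1) is effectively the per-command round trip; raising -q to 32 trades latency (~1.8 ms average) for roughly 3x the IOPS, the expected queueing behavior.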
00:21:00.523 Controller IO queue size 128, less than required.
00:21:00.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:00.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:00.523 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:00.523 Initialization complete. Launching workers.
00:21:00.523 ========================================================
00:21:00.523 Latency(us)
00:21:00.523 Device Information : IOPS MiB/s Average min max
00:21:00.523 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3818.00 954.50 33804.89 15175.55 88485.14
00:21:00.523 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3955.50 988.88 31905.08 14957.69 56466.68
00:21:00.523 ========================================================
00:21:00.523 Total : 7773.50 1943.38 32838.18 14957.69 88485.14
00:21:00.523
00:21:00.523 08:57:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:21:01.091 No valid NVMe controllers or AIO or URING devices found
00:21:01.091 Initializing NVMe Controllers
00:21:01.091 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:01.091 Controller IO queue size 128, less than required.
00:21:01.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.091 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:21:01.091 Controller IO queue size 128, less than required.
00:21:01.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.091 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:21:01.091 WARNING: Some requested NVMe devices were skipped
00:21:01.092 08:57:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:21:05.285 Initializing NVMe Controllers
00:21:05.285 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:05.285 Controller IO queue size 128, less than required.
00:21:05.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.285 Controller IO queue size 128, less than required.
00:21:05.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.285 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:05.285 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:05.285 Initialization complete. Launching workers.
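A quick consistency check on the 256 KiB run above: MiB/s is just IOPS times the IO size, so for NSID 1, 3818.00 IOPS x 262144 bytes = 954.50 MiB/s, exactly the second column of the table. In shell arithmetic:

  echo $(( 3818 * 262144 / 1048576 ))   # 954 (truncated; the tool reports 954.50)

The run launched just above also adds --transport-stat, which is why a per-device dump of RDMA poll, work-request, and doorbell counters precedes the latency table that follows.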
00:21:01.092 08:57:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:21:05.285 Initializing NVMe Controllers
00:21:05.285 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:05.285 Controller IO queue size 128, less than required.
00:21:05.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.285 Controller IO queue size 128, less than required.
00:21:05.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.285 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:05.285 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:05.285 Initialization complete. Launching workers.
00:21:05.285
00:21:05.285 ====================
00:21:05.285 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:05.285 RDMA transport:
00:21:05.285 dev name: mlx5_0
00:21:05.285 polls: 398502
00:21:05.285 idle_polls: 394974
00:21:05.285 completions: 42662
00:21:05.285 queued_requests: 1
00:21:05.285 total_send_wrs: 21331
00:21:05.285 send_doorbell_updates: 3266
00:21:05.285 total_recv_wrs: 21458
00:21:05.285 recv_doorbell_updates: 3267
00:21:05.285 ---------------------------------
00:21:05.285
00:21:05.285 ====================
00:21:05.285 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:05.285 RDMA transport:
00:21:05.285 dev name: mlx5_0
00:21:05.285 polls: 399060
00:21:05.285 idle_polls: 398792
00:21:05.285 completions: 19778
00:21:05.285 queued_requests: 1
00:21:05.285 total_send_wrs: 9889
00:21:05.285 send_doorbell_updates: 252
00:21:05.285 total_recv_wrs: 10016
00:21:05.285 recv_doorbell_updates: 253
00:21:05.285 ---------------------------------
00:21:05.285 ========================================================
00:21:05.285 Latency(us)
00:21:05.285 Device Information : IOPS MiB/s Average min max
00:21:05.285 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5323.94 1330.99 24036.25 11488.40 75185.49
00:21:05.285 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2468.03 617.01 51584.54 30866.39 76090.71
00:21:05.285 ========================================================
00:21:05.285 Total : 7791.97 1947.99 32761.91 11488.40 76090.71
00:21:05.285
00:21:05.285 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:21:05.617 rmmod nvme_rdma
00:21:05.617 rmmod nvme_fabrics
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 491622 ']'
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 491622
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 491622 ']'
00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@954 -- # kill -0 491622 00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:05.617 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 491622 00:21:05.940 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:05.940 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:05.940 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 491622' 00:21:05.940 killing process with pid 491622 00:21:05.940 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 491622 00:21:05.940 08:57:28 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 491622 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:21:07.851 00:21:07.851 real 0m30.730s 00:21:07.851 user 1m39.997s 00:21:07.851 sys 0m5.768s 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:07.851 ************************************ 00:21:07.851 END TEST nvmf_perf 00:21:07.851 ************************************ 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.851 ************************************ 00:21:07.851 START TEST nvmf_fio_host 00:21:07.851 ************************************ 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:07.851 * Looking for test storage... 
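Each suite in this job runs through the same run_test wrapper, which emits the START TEST / END TEST banners and the real/user/sys accounting shown above. Roughly, as a simplified sketch of the pattern (not the exact autotest_common.sh implementation):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # run the suite script with its arguments
        echo "************ END TEST $name ************"
    }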
00:21:07.851 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lcov --version 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:21:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.851 --rc genhtml_branch_coverage=1 00:21:07.851 --rc genhtml_function_coverage=1 00:21:07.851 --rc genhtml_legend=1 00:21:07.851 --rc geninfo_all_blocks=1 00:21:07.851 --rc geninfo_unexecuted_blocks=1 00:21:07.851 00:21:07.851 ' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:21:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.851 --rc genhtml_branch_coverage=1 00:21:07.851 --rc genhtml_function_coverage=1 00:21:07.851 --rc genhtml_legend=1 00:21:07.851 --rc geninfo_all_blocks=1 00:21:07.851 --rc geninfo_unexecuted_blocks=1 00:21:07.851 00:21:07.851 ' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:21:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.851 --rc genhtml_branch_coverage=1 00:21:07.851 --rc genhtml_function_coverage=1 00:21:07.851 --rc genhtml_legend=1 00:21:07.851 --rc geninfo_all_blocks=1 00:21:07.851 --rc geninfo_unexecuted_blocks=1 00:21:07.851 00:21:07.851 ' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:21:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.851 --rc genhtml_branch_coverage=1 00:21:07.851 --rc genhtml_function_coverage=1 00:21:07.851 --rc genhtml_legend=1 00:21:07.851 --rc geninfo_all_blocks=1 00:21:07.851 --rc geninfo_unexecuted_blocks=1 00:21:07.851 00:21:07.851 ' 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.851 08:57:30 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.851 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.852 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.112 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:08.112 
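The '[: : integer expression expected' message above is plain bash behavior rather than a harness failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty value, and test refuses to compare a non-integer. A minimal reproduction (the variable here is hypothetical, not the one common.sh actually checks):

    $ x=""; [ "$x" -eq 1 ]
    bash: [: : integer expression expected

The test simply returns a non-zero status and the script carries on, as the following nvmf/common.sh lines show.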
08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.112 08:57:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:14.686 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:14.686 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:14.686 Found net devices under 0000:da:00.0: mlx_0_0 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:14.686 Found net devices under 0000:da:00.1: mlx_0_1 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # rdma_device_init 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:21:14.686 
08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:14.686 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:14.687 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:14.687 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:21:14.687 altname enp218s0f0np0 00:21:14.687 altname ens818f0np0 00:21:14.687 inet 192.168.100.8/24 scope global mlx_0_0 00:21:14.687 valid_lft forever preferred_lft forever 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:14.687 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:14.687 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:21:14.687 altname enp218s0f1np1 00:21:14.687 altname ens818f1np1 00:21:14.687 inet 192.168.100.9/24 scope global mlx_0_1 00:21:14.687 valid_lft forever preferred_lft forever 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:14.687 08:57:36 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:21:14.687 192.168.100.9' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:21:14.687 192.168.100.9' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # head -n 1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:21:14.687 192.168.100.9' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # tail -n +2 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # head -n 1 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=499268 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 499268 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 499268 ']' 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.687 08:57:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.687 [2024-11-06 08:57:36.817500] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:21:14.687 [2024-11-06 08:57:36.817554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.687 [2024-11-06 08:57:36.893738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.687 [2024-11-06 08:57:36.936811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.687 [2024-11-06 08:57:36.936846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.687 [2024-11-06 08:57:36.936853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.687 [2024-11-06 08:57:36.936859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.687 [2024-11-06 08:57:36.936864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.687 [2024-11-06 08:57:36.938411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.687 [2024-11-06 08:57:36.938525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.687 [2024-11-06 08:57:36.938630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.687 [2024-11-06 08:57:36.938631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:14.687 [2024-11-06 08:57:37.222102] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23b7da0/0x23bc290) succeed. 00:21:14.687 [2024-11-06 08:57:37.231148] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23b9430/0x23fd930) succeed. 
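At this point nvmf_tgt is up, both mlx5 ports (PCI 0x15b3:0x1015) carry 192.168.100.8/9, and the RDMA transport and IB devices are registered. The target is then populated over JSON-RPC; condensed from the rpc.py invocations in this suite (workspace prefix dropped), the bring-up sequence is:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

bdev_malloc_create 64 512 allocates a 64 MiB RAM-backed bdev with 512-byte blocks; Malloc1 is the namespace the fio jobs below exercise.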
00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:14.687 Malloc1 00:21:14.687 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:14.947 08:57:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:15.206 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:15.206 [2024-11-06 08:57:38.205496] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.464 08:57:38 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:15.464 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.745 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.745 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:15.745 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:15.745 08:57:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:16.002 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:16.002 fio-3.35 00:21:16.002 Starting 1 thread 00:21:18.528 00:21:18.528 test: (groupid=0, jobs=1): err= 0: pid=499858: Wed Nov 6 08:57:41 2024 00:21:18.528 read: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(136MiB/2004msec) 00:21:18.528 slat (nsec): min=1375, max=26789, avg=1548.63, stdev=518.14 00:21:18.528 clat (usec): min=1848, max=6686, avg=3645.82, stdev=93.18 00:21:18.528 lat (usec): min=1866, max=6687, avg=3647.37, stdev=93.10 00:21:18.528 clat percentiles (usec): 00:21:18.528 | 1.00th=[ 3589], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:21:18.528 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3654], 60.00th=[ 3654], 00:21:18.528 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3654], 00:21:18.528 | 99.00th=[ 3851], 99.50th=[ 4047], 99.90th=[ 4817], 99.95th=[ 5735], 00:21:18.528 | 99.99th=[ 6652] 00:21:18.528 bw ( KiB/s): min=68544, max=70320, per=100.00%, avg=69746.00, stdev=813.02, samples=4 00:21:18.528 iops : min=17136, max=17580, avg=17436.50, stdev=203.26, samples=4 00:21:18.528 write: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(137MiB/2004msec); 0 zone resets 00:21:18.528 slat (nsec): min=1416, max=22054, avg=1605.70, stdev=485.38 00:21:18.528 clat (usec): min=1877, max=6667, avg=3644.06, stdev=90.02 00:21:18.528 lat (usec): min=1886, max=6669, avg=3645.67, stdev=89.93 00:21:18.528 clat percentiles (usec): 00:21:18.528 | 1.00th=[ 3589], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:21:18.528 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3654], 00:21:18.528 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3654], 00:21:18.528 | 99.00th=[ 3818], 99.50th=[ 4015], 99.90th=[ 4817], 99.95th=[ 5735], 00:21:18.528 | 99.99th=[ 6259] 00:21:18.528 bw ( KiB/s): min=68664, max=70424, per=100.00%, avg=69838.00, stdev=806.99, samples=4 00:21:18.528 iops : min=17166, max=17606, avg=17459.50, stdev=201.75, samples=4 00:21:18.528 lat (msec) : 2=0.01%, 4=99.44%, 10=0.55% 00:21:18.528 cpu : usr=99.55%, sys=0.05%, ctx=17, majf=0, minf=2 00:21:18.528 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:18.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:18.528 issued rwts: total=34943,34976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:18.528 00:21:18.528 Run status group 0 (all jobs): 00:21:18.528 READ: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=136MiB (143MB), run=2004-2004msec 00:21:18.528 WRITE: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=137MiB (143MB), run=2004-2004msec 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:18.528 08:57:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:18.528 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:18.528 fio-3.35 00:21:18.528 Starting 1 thread 00:21:21.054 00:21:21.054 test: (groupid=0, jobs=1): err= 0: pid=500352: Wed Nov 6 08:57:43 2024 00:21:21.054 read: IOPS=14.0k, BW=219MiB/s (229MB/s)(433MiB/1977msec) 00:21:21.054 slat (nsec): min=2283, max=45668, avg=2601.98, stdev=1001.05 00:21:21.054 clat (usec): min=502, max=8670, avg=1587.03, stdev=1247.19 00:21:21.054 lat (usec): min=505, max=8700, avg=1589.63, stdev=1247.55 00:21:21.054 clat percentiles (usec): 00:21:21.054 | 1.00th=[ 701], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 938], 00:21:21.054 | 30.00th=[ 1012], 40.00th=[ 1090], 50.00th=[ 1188], 60.00th=[ 1287], 00:21:21.054 | 70.00th=[ 1418], 80.00th=[ 1582], 90.00th=[ 3130], 95.00th=[ 5080], 00:21:21.054 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 7570], 99.95th=[ 7767], 00:21:21.054 | 99.99th=[ 8586] 00:21:21.054 bw ( KiB/s): min=108992, max=111776, per=49.32%, avg=110512.00, stdev=1199.18, samples=4 00:21:21.054 iops : min= 6812, max= 6986, avg=6907.00, stdev=74.95, samples=4 00:21:21.054 write: IOPS=7876, BW=123MiB/s (129MB/s)(224MiB/1822msec); 0 zone resets 00:21:21.054 slat (usec): min=26, max=153, avg=29.19, stdev= 6.07 00:21:21.054 clat (usec): min=4590, max=20940, avg=13146.14, stdev=1832.05 00:21:21.054 lat (usec): min=4617, max=20967, avg=13175.34, stdev=1831.48 00:21:21.054 clat percentiles (usec): 00:21:21.054 | 1.00th=[ 7308], 5.00th=[10421], 10.00th=[11076], 20.00th=[11731], 00:21:21.054 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[13566], 00:21:21.054 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15401], 95.00th=[16057], 00:21:21.054 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[20317], 00:21:21.054 | 99.99th=[20841] 00:21:21.054 bw ( KiB/s): min=110208, max=116256, per=90.43%, avg=113968.00, stdev=2618.60, samples=4 00:21:21.054 iops : min= 6888, max= 7266, avg=7123.00, stdev=163.66, samples=4 00:21:21.054 lat (usec) : 750=1.56%, 1000=17.37% 00:21:21.054 lat (msec) : 2=39.19%, 4=2.27%, 10=6.71%, 20=32.88%, 50=0.02% 00:21:21.054 cpu : usr=96.91%, sys=1.45%, ctx=185, majf=0, minf=2 00:21:21.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:21.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:21.054 issued rwts: total=27687,14351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:21.054 00:21:21.054 Run status group 0 (all jobs): 00:21:21.054 READ: bw=219MiB/s (229MB/s), 219MiB/s-219MiB/s (229MB/s-229MB/s), io=433MiB (454MB), run=1977-1977msec 00:21:21.054 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=224MiB (235MB), run=1822-1822msec 00:21:21.054 08:57:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 
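Both fio passes above go through the SPDK fio plugin rather than the kernel NVMe-oF initiator: stock fio is launched with build/fio/spdk_nvme LD_PRELOADed and the connection is passed in transport-ID form via --filename. A minimal standalone equivalent (paths assumed to match this workspace layout):

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        app/fio/nvme/example_config.fio --bs=4096 \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'

Because ioengine=spdk submits IO from userspace, nearly all reported CPU time lands in usr rather than sys, which matches the cpu lines of both runs.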
00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.054 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:21.054 rmmod nvme_rdma 00:21:21.054 rmmod nvme_fabrics 00:21:21.055 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.055 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:21.055 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:21.055 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 499268 ']' 00:21:21.055 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 499268 00:21:21.055 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 499268 ']' 00:21:21.055 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 499268 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 499268 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 499268' 00:21:21.313 killing process with pid 499268 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 499268 00:21:21.313 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 499268 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:21:21.573 00:21:21.573 real 0m13.703s 00:21:21.573 user 0m47.987s 00:21:21.573 sys 0m5.370s 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.573 ************************************ 00:21:21.573 END TEST nvmf_fio_host 00:21:21.573 ************************************ 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.573 ************************************ 00:21:21.573 START TEST nvmf_failover 00:21:21.573 ************************************ 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:21.573 * Looking for test storage... 00:21:21.573 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lcov --version 00:21:21.573 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:21:21.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.833 --rc genhtml_branch_coverage=1 00:21:21.833 --rc genhtml_function_coverage=1 00:21:21.833 --rc genhtml_legend=1 00:21:21.833 --rc geninfo_all_blocks=1 00:21:21.833 --rc geninfo_unexecuted_blocks=1 00:21:21.833 00:21:21.833 ' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:21:21.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.833 --rc genhtml_branch_coverage=1 00:21:21.833 --rc genhtml_function_coverage=1 00:21:21.833 --rc genhtml_legend=1 00:21:21.833 --rc geninfo_all_blocks=1 00:21:21.833 --rc geninfo_unexecuted_blocks=1 00:21:21.833 00:21:21.833 ' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:21:21.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.833 --rc genhtml_branch_coverage=1 00:21:21.833 --rc genhtml_function_coverage=1 00:21:21.833 --rc genhtml_legend=1 00:21:21.833 --rc geninfo_all_blocks=1 00:21:21.833 --rc geninfo_unexecuted_blocks=1 00:21:21.833 00:21:21.833 ' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:21:21.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.833 --rc genhtml_branch_coverage=1 00:21:21.833 --rc genhtml_function_coverage=1 00:21:21.833 --rc genhtml_legend=1 00:21:21.833 --rc geninfo_all_blocks=1 00:21:21.833 --rc geninfo_unexecuted_blocks=1 00:21:21.833 00:21:21.833 ' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.833 08:57:44 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.833 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:21.833 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.834 08:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:28.406 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:28.406 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:28.406 Found net devices under 0000:da:00.0: mlx_0_0 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:28.406 Found net devices under 0000:da:00.1: mlx_0_1 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # rdma_device_init 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@528 -- # allocate_nic_ips 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:28.406 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:28.407 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:28.407 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:21:28.407 altname enp218s0f0np0 00:21:28.407 altname ens818f0np0 00:21:28.407 inet 192.168.100.8/24 scope global mlx_0_0 00:21:28.407 
valid_lft forever preferred_lft forever 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:28.407 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:28.407 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:21:28.407 altname enp218s0f1np1 00:21:28.407 altname ens818f1np1 00:21:28.407 inet 192.168.100.9/24 scope global mlx_0_1 00:21:28.407 valid_lft forever preferred_lft forever 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.407 08:57:50 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:21:28.407 192.168.100.9' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:21:28.407 192.168.100.9' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # head -n 1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:21:28.407 192.168.100.9' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # tail -n +2 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # head -n 1 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=503958 00:21:28.407 
08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 503958 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 503958 ']' 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:28.407 [2024-11-06 08:57:50.575243] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:21:28.407 [2024-11-06 08:57:50.575302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.407 [2024-11-06 08:57:50.652315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.407 [2024-11-06 08:57:50.692231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.407 [2024-11-06 08:57:50.692269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.407 [2024-11-06 08:57:50.692276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.407 [2024-11-06 08:57:50.692285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.407 [2024-11-06 08:57:50.692290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
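Note: the failover test brings up its own target here: nvmf_tgt is pinned to three cores (-m 0xE, hence "Total cores available: 3") with every tracepoint group enabled (-e 0xFFFF), and waitforlisten blocks until the app answers on its RPC socket before any configuration RPCs are issued. A hedged sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock (not the literal helper bodies from autotest_common.sh):

  # Sketch: launch the target, then poll the RPC server until it responds.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done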
00:21:28.407 [2024-11-06 08:57:50.693693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.407 [2024-11-06 08:57:50.693801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.407 [2024-11-06 08:57:50.693802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.407 08:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:28.407 [2024-11-06 08:57:51.019541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d98530/0x1d9ca20) succeed. 00:21:28.407 [2024-11-06 08:57:51.028385] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d99b20/0x1dde0c0) succeed. 00:21:28.407 08:57:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:28.407 Malloc0 00:21:28.407 08:57:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.679 08:57:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.939 08:57:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:28.939 [2024-11-06 08:57:51.951346] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:29.197 08:57:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:29.197 [2024-11-06 08:57:52.135684] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:29.197 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:29.455 [2024-11-06 08:57:52.320333] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # 
bdevperf_pid=504217 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 504217 /var/tmp/bdevperf.sock 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 504217 ']' 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.455 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:29.712 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:29.712 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:29.712 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:29.970 NVMe0n1 00:21:29.970 08:57:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:30.227 00:21:30.227 08:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=504388 00:21:30.227 08:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.227 08:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:31.159 08:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:31.416 08:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:34.692 08:57:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:34.692 00:21:34.692 08:57:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:34.949 08:57:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:38.229 08:58:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:38.229 [2024-11-06 08:58:00.984274] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:38.229 08:58:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:39.161 08:58:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:39.418 08:58:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 504388 00:21:45.979 { 00:21:45.979 "results": [ 00:21:45.979 { 00:21:45.979 "job": "NVMe0n1", 00:21:45.979 "core_mask": "0x1", 00:21:45.979 "workload": "verify", 00:21:45.979 "status": "finished", 00:21:45.979 "verify_range": { 00:21:45.979 "start": 0, 00:21:45.979 "length": 16384 00:21:45.979 }, 00:21:45.979 "queue_depth": 128, 00:21:45.979 "io_size": 4096, 00:21:45.979 "runtime": 15.004657, 00:21:45.979 "iops": 14062.300791014417, 00:21:45.979 "mibps": 54.930862464900066, 00:21:45.979 "io_failed": 4445, 00:21:45.979 "io_timeout": 0, 00:21:45.979 "avg_latency_us": 8892.710307388139, 00:21:45.979 "min_latency_us": 358.88761904761907, 00:21:45.979 "max_latency_us": 1046578.7123809524 00:21:45.979 } 00:21:45.979 ], 00:21:45.979 "core_count": 1 00:21:45.979 } 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 504217 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 504217 ']' 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 504217 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 504217 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 504217' 00:21:45.979 killing process with pid 504217 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 504217 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 504217 00:21:45.979 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.979 [2024-11-06 08:57:52.381723] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
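Note: the JSON block above is the bdevperf run summary, and what follows is the harness dumping bdevperf's own log (try.txt). The shape of the test: NVMe0 was attached with -x failover on ports 4420 and 4421, a 15-second verify workload was started via perform_tests, and the listeners were then flipped underneath it; 4445 io_failed against ~14.1k IOPS over the full 15.0 s runtime is consistent with I/O being aborted and retried across path switches rather than the job failing. The listener choreography, condensed from the trace above (rpc.py path, NQN, and addresses exactly as logged; the interleaved sleeps give each path time to carry I/O):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4420  # I/O fails over to 4421
  sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $NQN -x failover           # add a third path
  $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4421
  sleep 3
  $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420     # restore the first path
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4422  # back to 4420 only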
00:21:45.979 [2024-11-06 08:57:52.381775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504217 ]
00:21:45.979 [2024-11-06 08:57:52.454112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:45.979 [2024-11-06 08:57:52.495385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:45.979 Running I/O for 15 seconds...
00:21:45.979 17792.00 IOPS, 69.50 MiB/s [2024-11-06T07:58:08.993Z] 9605.50 IOPS, 37.52 MiB/s [2024-11-06T07:58:08.993Z]
[... ~127 NOTICE pairs elided (08:57:55.313645 through 08:57:55.323998), each a nvme_io_qpair_print_command immediately followed by its spdk_nvme_print_completion: READ sqid:1 lba:22616-23544 (SGL KEYED DATA BLOCK, key:0x181d00) and WRITE sqid:1 lba:23552-23624 (SGL DATA BLOCK OFFSET 0x0), len:8 each, every one completed ABORTED - SQ DELETION (00/08) ...]
00:21:45.983 [2024-11-06 08:57:55.325901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:45.983 [2024-11-06 08:57:55.325914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-06 08:57:55.325920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23632 len:8 PRP1 0x0 PRP2 0x0
[2024-11-06 08:57:55.325927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.983 [2024-11-06 08:57:55.325968] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
[2024-11-06 08:57:55.325978] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[... 4 NOTICE pairs elided (08:57:55.326014-08:57:55.326066): ASYNC EVENT REQUEST (0c) qid:0 cid:1-4 on the admin qpair, each completed ABORTED - SQ DELETION (00/08) ...]
00:21:45.983 [2024-11-06 08:57:55.343176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-06 08:57:55.343192] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
[2024-11-06 08:57:55.343204] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:21:45.983 [2024-11-06 08:57:55.346021] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:45.983 [2024-11-06 08:57:55.390915] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
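Dumps like the one condensed above are easier to audit mechanically than by eye: every entry is a nvme_io_qpair_print_command NOTICE immediately followed by its spdk_nvme_print_completion NOTICE. A minimal pair-counting sketch (Python, standard library only; the file name bdevperf.log is a hypothetical capture of this console output, not something the test itself produces):

    import re
    from collections import Counter

    # Matches "... nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22616 len:8 ..."
    CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                     r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
    # Matches "... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ..."
    CPL = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ([A-Z ]+?) - ([A-Z ]+?) \((\d+)/(\d+)\)")

    def summarize(path):
        ops = Counter()    # (opcode, status) -> count
        lbas = {}          # opcode -> [min_lba, max_lba]
        pending = None     # last command print seen, awaiting its completion print
        with open(path) as f:
            for line in f:
                m = CMD.search(line)
                if m:
                    op, lba = m.group(1), int(m.group(5))
                    lo_hi = lbas.setdefault(op, [lba, lba])
                    lo_hi[0] = min(lo_hi[0], lba)
                    lo_hi[1] = max(lo_hi[1], lba)
                    pending = op
                    continue
                m = CPL.search(line)
                if m and pending:
                    ops[(pending, m.group(2).strip())] += 1
                    pending = None
        for (op, status), n in sorted(ops.items()):
            lo, hi = lbas[op]
            print(f"{op:5s} {status}: {n} commands, lba {lo}-{hi}")

    summarize("bdevperf.log")   # hypothetical capture of the dump above

Pairing on adjacency is enough for this dump because bdevperf prints each command and its completion back to back; a log with interleaved qpairs would need to key on cid instead.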
00:21:45.983 11347.67 IOPS, 44.33 MiB/s [2024-11-06T07:58:08.997Z] 12947.25 IOPS, 50.58 MiB/s [2024-11-06T07:58:08.997Z] 12273.00 IOPS, 47.94 MiB/s [2024-11-06T07:58:08.997Z]
[... NOTICE pairs elided (08:57:58.795957 onward): interleaved READ sqid:1 lba:109792-110040 (SGL KEYED DATA BLOCK, key:0x182800) and WRITE sqid:1 lba:110288-110472 (SGL DATA BLOCK OFFSET 0x0), len:8 each, every one completed ABORTED - SQ DELETION (00/08) ...]
00:21:45.985 [2024-11-06 08:57:58.796874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12
nsid:1 lba:110048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.796880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.796895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.796912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.796928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.796944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.796959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.796973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.796988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.796996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797162] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x182800 00:21:45.985 [2024-11-06 08:57:58.797199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.985 [2024-11-06 08:57:58.797288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.985 [2024-11-06 08:57:58.797294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 
key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 
m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.986 [2024-11-06 08:57:58.797804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x182800 00:21:45.986 [2024-11-06 08:57:58.797865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.986 [2024-11-06 08:57:58.797873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x182800 00:21:45.987 [2024-11-06 08:57:58.797881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.987 [2024-11-06 08:57:58.797891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110264 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043ba000 len:0x1000 key:0x182800 00:21:45.987 [2024-11-06 08:57:58.797897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.987 [2024-11-06 08:57:58.797907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x182800 00:21:45.987 [2024-11-06 08:57:58.797915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.987 [2024-11-06 08:57:58.797923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x182800 00:21:45.987 [2024-11-06 08:57:58.797930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.987 [2024-11-06 08:57:58.797937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.987 [2024-11-06 08:57:58.797943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.987 [2024-11-06 08:57:58.799662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.987 [2024-11-06 08:57:58.799675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.987 [2024-11-06 08:57:58.799682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110808 len:8 PRP1 0x0 PRP2 0x0 00:21:45.987 [2024-11-06 08:57:58.799689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.987 [2024-11-06 08:57:58.799728] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:21:45.987 [2024-11-06 08:57:58.799738] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:45.987 [2024-11-06 08:57:58.802589] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:45.987 [2024-11-06 08:57:58.816909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:21:45.987 [2024-11-06 08:57:58.859949] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
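The abort storm above is uniform enough to tally mechanically. A minimal sketch (not part of the test suite; the log path is hypothetical) that counts the aborted commands per opcode from lines in the exact format printed by nvme_io_qpair_print_command above:

    # Summarize "ABORTED - SQ DELETION" storms: count aborted I/O commands
    # per opcode and report the LBA range. Assumes only the line format
    # visible in this console log; "console.log" is a hypothetical path.
    import re
    from collections import defaultdict

    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def summarize(log_path: str) -> None:
        lbas = defaultdict(list)  # opcode -> list of aborted LBAs
        with open(log_path) as f:
            for line in f:
                m = CMD_RE.search(line)
                if m:
                    lbas[m.group(1)].append(int(m.group(5)))
        for opcode, vals in sorted(lbas.items()):
            print(f"{opcode}: {len(vals)} commands, "
                  f"lba {min(vals)}..{max(vals)}")

    if __name__ == "__main__":
        summarize("console.log")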
00:21:45.987 11356.50 IOPS, 44.36 MiB/s [2024-11-06T07:58:09.001Z]
00:21:45.987 12300.86 IOPS, 48.05 MiB/s [2024-11-06T07:58:09.001Z]
00:21:45.987 13022.62 IOPS, 50.87 MiB/s [2024-11-06T07:58:09.001Z]
00:21:45.987 13464.11 IOPS, 52.59 MiB/s [2024-11-06T07:58:09.001Z]
00:21:45.987 [2024-11-06 08:58:03.189335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x181d00
00:21:45.987 [2024-11-06 08:58:03.189371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0
00:21:45.989 [... the same NOTICE pair repeats after the failover, now with key:0x181d00, for every queued I/O on qid:1: READ lba:75832-76088 and WRITE lba:76248-76664, len:8 each; the run is truncated here mid-entry at "[2024-11-06" ...]
08:58:03.190694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x181d00 00:21:45.989 [2024-11-06 08:58:03.190773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x181d00 00:21:45.989 [2024-11-06 08:58:03.190788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x181d00 00:21:45.989 [2024-11-06 08:58:03.190803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190835] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.989 [2024-11-06 08:58:03.190896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.989 [2024-11-06 08:58:03.190903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181d00 00:21:45.989 [2024-11-06 08:58:03.190911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.190919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.190926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.190935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.190942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.190950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.190957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.190965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.190972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.190980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:92 nsid:1 lba:76160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.190987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.190997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76232 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043ec000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x181d00 00:21:45.990 [2024-11-06 08:58:03.191140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191273] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.191302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.990 [2024-11-06 08:58:03.191308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:137c5000 sqhd:7210 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.193119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.990 [2024-11-06 08:58:03.193131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.990 [2024-11-06 08:58:03.193138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76848 len:8 PRP1 0x0 PRP2 0x0 00:21:45.990 [2024-11-06 08:58:03.193145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.990 [2024-11-06 08:58:03.193186] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:21:45.990 [2024-11-06 08:58:03.193196] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:45.990 [2024-11-06 08:58:03.196042] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:45.990 [2024-11-06 08:58:03.210279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:21:45.990 12117.70 IOPS, 47.33 MiB/s [2024-11-06T07:58:09.004Z] [2024-11-06 08:58:03.254889] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:21:45.990 12638.55 IOPS, 49.37 MiB/s [2024-11-06T07:58:09.004Z] 13084.00 IOPS, 51.11 MiB/s [2024-11-06T07:58:09.004Z] 13458.77 IOPS, 52.57 MiB/s [2024-11-06T07:58:09.004Z] 13780.36 IOPS, 53.83 MiB/s [2024-11-06T07:58:09.004Z] 14061.93 IOPS, 54.93 MiB/s 00:21:45.990 Latency(us) 00:21:45.990 [2024-11-06T07:58:09.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.990 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.990 Verification LBA range: start 0x0 length 0x4000 00:21:45.990 NVMe0n1 : 15.00 14062.30 54.93 296.24 0.00 8892.71 358.89 1046578.71 00:21:45.990 [2024-11-06T07:58:09.004Z] =================================================================================================================== 00:21:45.990 [2024-11-06T07:58:09.004Z] Total : 14062.30 54.93 296.24 0.00 8892.71 358.89 1046578.71 00:21:45.990 Received shutdown signal, test time was about 15.000000 seconds 00:21:45.990 00:21:45.990 Latency(us) 00:21:45.990 [2024-11-06T07:58:09.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.990 [2024-11-06T07:58:09.004Z] =================================================================================================================== 00:21:45.990 [2024-11-06T07:58:09.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.990 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:45.990 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:45.990 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:45.990 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=506846 00:21:45.990 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 506846 /var/tmp/bdevperf.sock 00:21:45.990 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:45.990 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 506846 ']' 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:45.991 [2024-11-06 08:58:08.939975] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:45.991 08:58:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:46.248 [2024-11-06 08:58:09.124593] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:46.248 08:58:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:46.505 NVMe0n1 00:21:46.505 08:58:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:46.760 00:21:46.760 08:58:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:47.017 00:21:47.017 08:58:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.017 08:58:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:47.273 08:58:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.530 08:58:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:50.821 08:58:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.821 08:58:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:50.821 08:58:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.821 08:58:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=507679 00:21:50.821 08:58:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 507679 00:21:51.759 { 00:21:51.759 "results": [ 00:21:51.759 { 00:21:51.759 "job": "NVMe0n1", 
00:21:51.759 "core_mask": "0x1", 00:21:51.759 "workload": "verify", 00:21:51.759 "status": "finished", 00:21:51.759 "verify_range": { 00:21:51.759 "start": 0, 00:21:51.759 "length": 16384 00:21:51.759 }, 00:21:51.759 "queue_depth": 128, 00:21:51.759 "io_size": 4096, 00:21:51.759 "runtime": 1.006708, 00:21:51.759 "iops": 17712.186651938795, 00:21:51.759 "mibps": 69.18822910913592, 00:21:51.759 "io_failed": 0, 00:21:51.759 "io_timeout": 0, 00:21:51.759 "avg_latency_us": 7183.100848976234, 00:21:51.759 "min_latency_us": 967.4361904761905, 00:21:51.759 "max_latency_us": 18474.910476190475 00:21:51.759 } 00:21:51.759 ], 00:21:51.759 "core_count": 1 00:21:51.759 } 00:21:51.759 08:58:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:51.759 [2024-11-06 08:58:08.570441] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:21:51.759 [2024-11-06 08:58:08.570497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506846 ] 00:21:51.759 [2024-11-06 08:58:08.647135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.759 [2024-11-06 08:58:08.684704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.759 [2024-11-06 08:58:10.315063] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:21:51.759 [2024-11-06 08:58:10.315676] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:21:51.759 [2024-11-06 08:58:10.315715] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:21:51.759 [2024-11-06 08:58:10.340764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:21:51.759 [2024-11-06 08:58:10.356950] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:21:51.759 Running I/O for 1 seconds... 
00:21:51.759 17664.00 IOPS, 69.00 MiB/s 00:21:51.759 Latency(us) 00:21:51.759 [2024-11-06T07:58:14.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.759 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:51.759 Verification LBA range: start 0x0 length 0x4000 00:21:51.759 NVMe0n1 : 1.01 17712.19 69.19 0.00 0.00 7183.10 967.44 18474.91 00:21:51.759 [2024-11-06T07:58:14.773Z] =================================================================================================================== 00:21:51.759 [2024-11-06T07:58:14.773Z] Total : 17712.19 69.19 0.00 0.00 7183.10 967.44 18474.91 00:21:51.759 08:58:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:51.759 08:58:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:52.018 08:58:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.277 08:58:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:52.277 08:58:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:52.277 08:58:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.536 08:58:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 506846 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 506846 ']' 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 506846 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 506846 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 506846' 00:21:55.826 killing process with pid 506846 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 506846 00:21:55.826 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 506846 00:21:56.085 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- 
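The JSON record and the plain-text Latency table above carry the same numbers (17712.19 IOPS, 69.19 MiB/s, 7183.10 us average latency). The test itself only greps the captured try.txt log rather than parsing JSON, so the extraction below is purely illustrative, assuming jq is available and the perform_tests output was saved to a hypothetical results.json:

# Illustrative only: pull the headline numbers out of a captured
# perform_tests result (field names match the JSON block above).
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json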
host/failover.sh@110 -- # sync 00:21:56.085 08:58:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.085 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:56.085 rmmod nvme_rdma 00:21:56.085 rmmod nvme_fabrics 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 503958 ']' 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 503958 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 503958 ']' 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 503958 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 503958 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 503958' 00:21:56.344 killing process with pid 503958 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 503958 00:21:56.344 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 503958 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:21:56.604 00:21:56.604 real 0m34.970s 00:21:56.604 user 1m58.575s 00:21:56.604 sys 0m6.147s 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:56.604 
************************************ 00:21:56.604 END TEST nvmf_failover 00:21:56.604 ************************************ 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.604 ************************************ 00:21:56.604 START TEST nvmf_host_discovery 00:21:56.604 ************************************ 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:21:56.604 * Looking for test storage... 00:21:56.604 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:21:56.604 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:21:56.863 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:21:56.863 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.863 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.863 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.863 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:21:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.864 --rc genhtml_branch_coverage=1 00:21:56.864 --rc genhtml_function_coverage=1 00:21:56.864 --rc genhtml_legend=1 00:21:56.864 --rc geninfo_all_blocks=1 00:21:56.864 --rc geninfo_unexecuted_blocks=1 00:21:56.864 00:21:56.864 ' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:21:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.864 --rc genhtml_branch_coverage=1 00:21:56.864 --rc genhtml_function_coverage=1 00:21:56.864 --rc genhtml_legend=1 00:21:56.864 --rc geninfo_all_blocks=1 00:21:56.864 --rc geninfo_unexecuted_blocks=1 00:21:56.864 00:21:56.864 ' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:21:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.864 --rc genhtml_branch_coverage=1 00:21:56.864 --rc genhtml_function_coverage=1 00:21:56.864 --rc genhtml_legend=1 00:21:56.864 --rc geninfo_all_blocks=1 00:21:56.864 --rc geninfo_unexecuted_blocks=1 00:21:56.864 00:21:56.864 ' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:21:56.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.864 --rc genhtml_branch_coverage=1 00:21:56.864 --rc genhtml_function_coverage=1 00:21:56.864 --rc genhtml_legend=1 00:21:56.864 --rc geninfo_all_blocks=1 00:21:56.864 --rc geninfo_unexecuted_blocks=1 00:21:56.864 00:21:56.864 ' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:56.864 08:58:19 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.864 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:21:56.864 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:21:56.864 00:21:56.864 real 0m0.210s 00:21:56.864 user 0m0.125s 00:21:56.864 sys 0m0.098s 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.864 ************************************ 00:21:56.864 END TEST nvmf_host_discovery 00:21:56.864 ************************************ 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.864 ************************************ 00:21:56.864 START TEST nvmf_host_multipath_status 00:21:56.864 ************************************ 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:56.864 * Looking for test storage... 00:21:56.864 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lcov --version 00:21:56.864 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:21:57.123 08:58:19 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:21:57.123 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:21:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.124 --rc genhtml_branch_coverage=1 00:21:57.124 --rc genhtml_function_coverage=1 00:21:57.124 --rc genhtml_legend=1 00:21:57.124 --rc geninfo_all_blocks=1 00:21:57.124 --rc geninfo_unexecuted_blocks=1 00:21:57.124 00:21:57.124 ' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:21:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.124 --rc genhtml_branch_coverage=1 00:21:57.124 --rc genhtml_function_coverage=1 00:21:57.124 --rc genhtml_legend=1 00:21:57.124 --rc geninfo_all_blocks=1 00:21:57.124 --rc geninfo_unexecuted_blocks=1 00:21:57.124 00:21:57.124 ' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:21:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.124 --rc genhtml_branch_coverage=1 00:21:57.124 --rc genhtml_function_coverage=1 00:21:57.124 --rc genhtml_legend=1 00:21:57.124 --rc geninfo_all_blocks=1 00:21:57.124 --rc geninfo_unexecuted_blocks=1 00:21:57.124 00:21:57.124 ' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:21:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.124 --rc genhtml_branch_coverage=1 00:21:57.124 --rc genhtml_function_coverage=1 
00:21:57.124 --rc genhtml_legend=1 00:21:57.124 --rc geninfo_all_blocks=1 00:21:57.124 --rc geninfo_unexecuted_blocks=1 00:21:57.124 00:21:57.124 ' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:57.124 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:57.124 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.125 08:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.695 08:58:25 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:03.695 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:03.695 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:03.695 Found net devices under 0000:da:00.0: mlx_0_0 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:03.695 Found net devices under 0000:da:00.1: mlx_0_1 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # rdma_device_init 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@528 -- # allocate_nic_ips 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:03.695 
08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:03.695 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:03.696 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:03.696 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:22:03.696 altname enp218s0f0np0 00:22:03.696 altname ens818f0np0 00:22:03.696 inet 192.168.100.8/24 scope global mlx_0_0 00:22:03.696 valid_lft forever preferred_lft forever 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
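[Annotation, not part of the captured log.] The get_ip_address calls traced above reduce to one small pipeline over "ip -o -4 addr show". The sketch below is a minimal reconstruction from the traced commands, not the verbatim helper in test/nvmf/common.sh, and uses the interface name this node reports:

    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "ADDRESS/PREFIX",
        # so strip the prefix length to get the bare IPv4 address
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this node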
00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:03.696 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:03.696 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:22:03.696 altname enp218s0f1np1 00:22:03.696 altname ens818f1np1 00:22:03.696 inet 192.168.100.9/24 scope global mlx_0_1 00:22:03.696 valid_lft forever preferred_lft forever 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:03.696 08:58:25 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:22:03.696 192.168.100.9' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:22:03.696 192.168.100.9' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # head -n 1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:22:03.696 192.168.100.9' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # tail -n +2 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # head -n 1 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:03.696 08:58:25 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=511948 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 511948 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 511948 ']' 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.696 08:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:03.696 [2024-11-06 08:58:25.910483] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:22:03.696 [2024-11-06 08:58:25.910528] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.696 [2024-11-06 08:58:25.984932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:03.696 [2024-11-06 08:58:26.028932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.696 [2024-11-06 08:58:26.028965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.696 [2024-11-06 08:58:26.028972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.696 [2024-11-06 08:58:26.028979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.696 [2024-11-06 08:58:26.028985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
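[Annotation, not part of the captured log.] For orientation, the target-side bring-up the trace performs next (multipath_status.sh lines 36-42) condenses to the rpc.py sequence below; the rpc.py path is abbreviated, and the addresses, ports, and NQN are the ones this run uses. A sketch of the traced steps, not a substitute for the script itself:

    # RDMA transport plus a 64 MiB / 512 B-block malloc backing bdev
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem with ANA reporting enabled (-r), then namespace and two listeners
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421

Two listeners on the same subsystem are what give bdevperf its two paths; toggling each listener's ANA state is then what the rest of the trace exercises.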
00:22:03.696 [2024-11-06 08:58:26.033225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.696 [2024-11-06 08:58:26.033228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=511948 00:22:03.956 08:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:04.215 [2024-11-06 08:58:26.972687] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13d26e0/0x13d6bd0) succeed. 00:22:04.215 [2024-11-06 08:58:26.982415] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13d3c30/0x1418270) succeed. 00:22:04.215 08:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:04.473 Malloc0 00:22:04.473 08:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:04.473 08:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.732 08:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:04.990 [2024-11-06 08:58:27.838656] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:04.990 08:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:05.249 [2024-11-06 08:58:28.022949] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=512213 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 512213 /var/tmp/bdevperf.sock 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 512213 ']' 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.249 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:05.508 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.508 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:22:05.508 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:05.508 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:06.076 Nvme0n1 00:22:06.076 08:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:06.076 Nvme0n1 00:22:06.076 08:58:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.076 08:58:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:08.644 08:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:08.644 08:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:08.644 08:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:08.644 08:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:09.662 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:09.662 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:09.662 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.662 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.922 08:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:10.181 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.181 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:10.182 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:10.182 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.441 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.441 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:10.441 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.441 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:10.700 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.700 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
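[Annotation, not part of the captured log.] Every check_status/port_status round from here on has the same shape: query bdevperf's RPC socket for the io_paths and compare one attribute of one listener against the expected value. Reconstructed from the traced commands; the real helpers live in host/multipath_status.sh and may differ cosmetically:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    port_status() {
        local port=$1 attr=$2 expected=$3 status
        # select the io_path whose listener trsvcid matches, read one field
        status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ "$status" == "$expected" ]]
    }
    # e.g. after set_ANA_state optimized optimized, as traced below:
    port_status 4420 current true
    port_status 4421 current false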
00:22:10.700 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:10.700 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.700 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.700 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:10.700 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:10.959 08:58:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:11.219 08:58:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:12.158 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:12.158 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:12.158 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.158 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:12.416 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.416 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:12.416 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.416 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:12.675 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.675 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:12.675 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.675 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:12.934 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.934 08:58:35 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:12.934 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:12.934 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.934 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.934 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:13.193 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.193 08:58:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:13.193 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.193 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:13.193 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.193 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:13.452 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.452 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:13.452 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:13.711 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:13.970 08:58:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:14.908 08:58:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:14.908 08:58:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.908 08:58:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.908 08:58:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:15.168 08:58:37 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.168 08:58:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:15.168 08:58:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.168 08:58:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:15.168 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.168 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:15.168 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.168 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.426 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.426 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.426 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.426 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.685 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.685 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.685 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.685 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.944 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.944 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.944 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.944 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.944 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.944 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:22:15.944 08:58:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:16.203 08:58:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:16.462 08:58:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:17.399 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:17.399 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:17.399 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.399 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.657 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.657 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:17.657 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.657 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.917 08:58:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.176 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.176 08:58:41 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:18.176 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.176 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.435 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.435 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.435 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.435 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.694 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.694 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:18.694 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:18.953 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:18.953 08:58:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:20.331 08:58:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:20.331 08:58:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.331 08:58:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.331 08:58:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.331 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.331 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:20.331 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.331 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:20.331 08:58:43 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.331 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:20.331 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.331 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:20.591 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.591 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:20.591 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.591 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:20.850 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.850 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:20.850 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.850 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.109 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.109 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:21.109 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.109 08:58:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:21.109 08:58:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.109 08:58:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:21.109 08:58:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:21.368 08:58:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:21.627 08:58:44 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:22.563 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:22.563 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:22.563 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.563 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.822 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.822 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:22.822 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.822 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.081 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.081 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.081 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.081 08:58:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:23.081 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.081 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:23.081 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.081 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.339 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.339 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:23.339 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.339 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:23.599 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:22:23.599 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:23.599 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.599 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:23.858 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.858 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:23.858 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:23.858 08:58:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:24.117 08:58:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:24.377 08:58:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:25.314 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:25.314 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:25.314 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.314 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.573 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.573 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:25.573 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.573 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:25.832 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.832 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:25.832 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.832 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:26.090 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.090 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:26.090 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.090 08:58:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.091 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.091 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:26.091 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.091 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.349 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.349 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.349 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.349 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.608 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.608 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:26.608 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:26.868 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:27.126 08:58:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:28.063 08:58:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:28.063 08:58:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:28.063 08:58:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.063 08:58:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.322 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.581 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.581 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.581 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.581 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.840 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.840 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:28.840 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.840 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.099 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.099 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.099 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.099 08:58:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.099 08:58:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.099 08:58:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:29.099 08:58:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:29.358 08:58:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:29.617 08:58:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:30.553 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:30.553 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:30.553 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.553 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.812 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.812 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:30.812 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.812 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.071 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.071 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.071 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.071 08:58:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.329 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.588 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.588 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.588 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.588 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.847 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.847 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:31.847 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:32.106 08:58:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:32.106 08:58:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.485 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.744 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.002 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.002 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:34.002 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.002 08:58:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.262 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.262 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:34.262 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.262 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 512213 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 512213 ']' 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 512213 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # uname 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 512213 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 512213' 00:22:34.521 killing process with pid 512213 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 512213 00:22:34.521 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 512213 00:22:34.521 { 00:22:34.521 "results": [ 00:22:34.521 { 00:22:34.521 "job": "Nvme0n1", 00:22:34.521 "core_mask": "0x4", 00:22:34.521 "workload": "verify", 00:22:34.521 "status": "terminated", 00:22:34.521 "verify_range": { 00:22:34.521 "start": 0, 00:22:34.521 "length": 16384 00:22:34.521 }, 00:22:34.521 "queue_depth": 128, 00:22:34.521 "io_size": 4096, 00:22:34.521 "runtime": 28.128817, 00:22:34.521 "iops": 15680.289718547354, 00:22:34.521 "mibps": 61.2511317130756, 00:22:34.521 "io_failed": 0, 00:22:34.521 "io_timeout": 0, 00:22:34.521 "avg_latency_us": 8142.991283063146, 00:22:34.521 "min_latency_us": 413.50095238095236, 00:22:34.521 "max_latency_us": 3019898.88 00:22:34.521 } 00:22:34.521 ], 00:22:34.521 "core_count": 1 00:22:34.521 } 00:22:34.785 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 512213 00:22:34.785 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:34.785 [2024-11-06 08:58:28.090867] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:22:34.785 [2024-11-06 08:58:28.090922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512213 ] 00:22:34.785 [2024-11-06 08:58:28.165115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.785 [2024-11-06 08:58:28.205857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.785 Running I/O for 90 seconds... 
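
[editor's note] Every port_status check traced in this run reduces to the same three steps: query bdevperf's RPC socket for its I/O paths, filter the JSON by listener port (trsvcid), and compare one field against the expected value. A minimal sketch of that pattern, reusing the rpc.py path, socket, and jq filter exactly as they appear in this log (the helper name mirrors the script's own port_status; it assumes a single matching path per port, as in this run):

    #!/usr/bin/env bash
    # Sketch of the port_status pattern traced above; paths and filter copied from this log.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    port_status() {
      local port=$1 field=$2 expected=$3
      local actual
      # bdev_nvme_get_io_paths reports poll groups whose io_paths carry
      # per-path current/connected/accessible flags.
      actual=$("$RPC" -s "$SOCK" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
    }

    # e.g. assert that after an ANA flip only the 4420 path is current:
    port_status 4420 current true && port_status 4421 current false
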
00:22:34.785 17988.00 IOPS, 70.27 MiB/s [2024-11-06T07:58:57.799Z] 18098.00 IOPS, 70.70 MiB/s [2024-11-06T07:58:57.799Z] 18133.33 IOPS, 70.83 MiB/s [2024-11-06T07:58:57.799Z] 18144.00 IOPS, 70.88 MiB/s [2024-11-06T07:58:57.799Z] 18173.00 IOPS, 70.99 MiB/s [2024-11-06T07:58:57.799Z] 18213.83 IOPS, 71.15 MiB/s [2024-11-06T07:58:57.799Z] 18229.29 IOPS, 71.21 MiB/s [2024-11-06T07:58:57.799Z] 18246.50 IOPS, 71.28 MiB/s [2024-11-06T07:58:57.799Z] 18252.78 IOPS, 71.30 MiB/s [2024-11-06T07:58:57.799Z] 18243.40 IOPS, 71.26 MiB/s [2024-11-06T07:58:57.799Z] 18242.36 IOPS, 71.26 MiB/s [2024-11-06T07:58:57.799Z] 18238.58 IOPS, 71.24 MiB/s [2024-11-06T07:58:57.799Z] [2024-11-06 08:58:41.698242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:34.785 
[2024-11-06 08:58:41.698428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.785 [2024-11-06 08:58:41.698582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 
sqhd:005c p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:34.785 [2024-11-06 08:58:41.698773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182100 00:22:34.785 [2024-11-06 08:58:41.698779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.786 [2024-11-06 08:58:41.698794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 
08:58:41.698885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.698992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.698999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x182100 00:22:34.786 [2024-11-06 08:58:41.699155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:34.786 [2024-11-06 08:58:41.699164] nvme_qpair.c: 
243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion [repetitive I/O trace condensed, 00:22:34.786-34.788, 2024-11-06 08:58:41.699-41.701]: roughly eighty alternating command/completion notice pairs on sqid:1 nsid:1 len:8: READ commands (SGL KEYED DATA BLOCK, key:0x182100, lba 120264-120536) and WRITE commands (SGL DATA BLOCK OFFSET 0x0, lba 120696-121000), every one completing with *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 while the active path is inaccessible.
17537.69 IOPS, 68.51 MiB/s [2024-11-06T07:58:57.802Z]
16285.00 IOPS, 63.61 MiB/s [2024-11-06T07:58:57.802Z]
15199.33 IOPS, 59.37 MiB/s [2024-11-06T07:58:57.802Z]
14822.06 IOPS, 57.90 MiB/s [2024-11-06T07:58:57.802Z]
15031.59 IOPS, 58.72 MiB/s [2024-11-06T07:58:57.802Z]
15189.00 IOPS, 59.33 MiB/s [2024-11-06T07:58:57.802Z]
15180.32 IOPS, 59.30 MiB/s [2024-11-06T07:58:57.802Z]
15167.95 IOPS, 59.25 MiB/s [2024-11-06T07:58:57.802Z]
15239.52 IOPS, 59.53 MiB/s [2024-11-06T07:58:57.802Z]
15377.50 IOPS, 60.07 MiB/s [2024-11-06T07:58:57.802Z]
15508.48 IOPS, 60.58 MiB/s [2024-11-06T07:58:57.802Z]
15514.46 IOPS, 60.60 MiB/s [2024-11-06T07:58:57.802Z]
15487.96 IOPS, 60.50 MiB/s
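A note on the (03/02) code repeated throughout the burst above: SPDK prints NVMe completion status as (SCT/SC) in hex. SCT 0x3 is the Path Related Status type, and under it SC 0x02 is Asymmetric Access Inaccessible, meaning the ANA state of the controller that received the I/O reports the namespace as inaccessible, so the multipath layer must retry the I/O on another path. A minimal bash sketch of the mapping (illustrative only, not SPDK code; it covers just the path-related status codes):

#!/usr/bin/env bash
# decode_sct_sc.sh -- map an SPDK "(SCT/SC)" pair, e.g. "03 02", to a name.
# Hypothetical helper; only the NVMe path-related status type (SCT 0x3) is covered.
sct=$((16#$1)); sc=$((16#$2))
if (( sct == 3 )); then
  case $sc in
    0) echo "INTERNAL PATH ERROR" ;;
    1) echo "ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
    2) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;
    3) echo "ASYMMETRIC ACCESS TRANSITION" ;;
    *) echo "unknown path-related status code: $sc" ;;
  esac
else
  echo "status type $sct is not path-related"
fi

Running it as ./decode_sct_sc.sh 03 02 prints the status seen on every completion in this burst.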
[2024-11-06T07:58:57.802Z] [2024-11-06 08:58:55.075-55.077] nvme_qpair.c: [repetitive I/O trace condensed]: a second burst of roughly sixty-five alternating 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notice pairs on sqid:1 nsid:1 len:8: READ commands (SGL KEYED DATA BLOCK, key:0x182100, lba 50664-51184) and WRITE commands (SGL DATA BLOCK OFFSET 0x0, lba 51208-51672), again all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0.
15480.58 IOPS, 60.47 MiB/s [2024-11-06T07:58:57.802Z]
15585.44 IOPS, 60.88 MiB/s [2024-11-06T07:58:57.802Z]
15675.75 IOPS, 61.23 MiB/s [2024-11-06T07:58:57.802Z]
Received shutdown signal, test time was about 28.129465 seconds
00:22:34.790
00:22:34.790 Latency(us)
00:22:34.790 Device Information          : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min     max
00:22:34.790 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:34.790 Verification LBA range: start 0x0 length 0x4000
00:22:34.790   Nvme0n1                   : 28.13       15680.29  61.25  0.00    0.00  8142.99  413.50  3019898.88
00:22:34.790 ===================================================================================================================
00:22:34.790   Total                     :             15680.29  61.25  0.00    0.00  8142.99  413.50  3019898.88
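Bursts like the two above are easier to digest in aggregate than line by line. Assuming the run's output was captured to a file (this test writes host/try.txt, removed in the cleanup below), a hedged sketch for summarising it; the patterns simply follow the notice format visible in this log and may need adjusting for other SPDK versions:

# Count commands per opcode (READ vs WRITE) in the captured output.
grep -oE '(READ|WRITE) sqid:[0-9]+' try.txt | awk '{print $1}' | sort | uniq -c
# Count completions per (SCT/SC) status pair, e.g. "(03/02)".
grep -oE '\([0-9a-f]{2}/[0-9a-f]{2}\)' try.txt | sort | uniq -c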
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:34.790 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 511948 ']'
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 511948
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 511948 ']'
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 511948
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 511948
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 511948'
killing process with pid 511948
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 511948
00:22:35.050 08:58:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 511948
00:22:35.308 08:58:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
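The teardown above follows a common pattern: remove the target-side state over JSON-RPC (scripts/rpc.py nvmf_delete_subsystem with the subsystem NQN), unload the kernel initiator modules with bounded retries, then stop the target process by PID and wait for it so its exit status is collected. A hedged sketch of that last step, assuming a PID file rather than the hard-coded 511948 seen in this run:

# stop_target.sh -- hypothetical helper mirroring the killprocess trace above.
pid=$(cat /var/run/spdk_tgt.pid)    # assumed PID file location, not from the log
if kill -0 "$pid" 2>/dev/null; then # kill -0 only probes whether the PID is alive
  echo "killing process with pid $pid"
  kill "$pid"                       # plain SIGTERM, as in the log
  wait "$pid" 2>/dev/null || true   # wait can only reap a child of this shell
fi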
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:22:35.309
00:22:35.309 real	0m38.326s
00:22:35.309 user	1m52.206s
00:22:35.309 sys	0m7.793s
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:35.309 ************************************
00:22:35.309 END TEST nvmf_host_multipath_status
00:22:35.309 ************************************
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.309 ************************************
00:22:35.309 START TEST nvmf_discovery_remove_ifc
00:22:35.309 ************************************
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:22:35.309 * Looking for test storage...
00:22:35.309 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lcov --version
00:22:35.309 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:22:35.568 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:35.568 [multi-line coverage exports condensed: common/autotest_common.sh@1702 exports LCOV_OPTS and @1703 exports LCOV='lcov ...', each carrying the same option block: --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1]
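The cmp_versions trace above is SPDK's dotted-version comparison: split both versions on '.', '-' or ':', iterate up to the longer component count treating missing components as 0, validate each component as a decimal, and compare element-wise (here 1.15 < 2, so lt succeeds). A condensed, runnable sketch of the same idea, assuming purely numeric components; this is a simplification for illustration, not a verbatim copy of scripts/common.sh:

# version_lt A B -- succeed iff version A sorts strictly before version B.
version_lt() {
  local -a v1 v2; local i n
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first differing component decides
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"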
00:22:35.569 00:22:35.569 ' 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:22:35.569 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:35.569 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:22:35.569 00:22:35.569 real 0m0.204s 00:22:35.569 user 0m0.129s 00:22:35.569 sys 0m0.089s 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.569 ************************************ 00:22:35.569 END TEST nvmf_discovery_remove_ifc 00:22:35.569 ************************************ 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.569 ************************************ 00:22:35.569 START TEST nvmf_identify_kernel_target 00:22:35.569 ************************************ 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:35.569 * Looking for test storage... 
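The "[: : integer expression expected" message logged above is bash's test builtin rejecting an empty string where -eq requires an integer ('[' '' -eq 1 ']'); the script tolerates it because the failing test merely skips that branch. A minimal guard that would silence the complaint (the variable name here is hypothetical, chosen only for illustration):

    flag=""                          # unset knob, as in the trace above
    # [ "$flag" -eq 1 ]              # would log: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then  # default the empty value to 0 before the numeric test
        echo "feature enabled"
    fi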
00:22:35.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lcov --version 00:22:35.569 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:35.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.830 --rc genhtml_branch_coverage=1 00:22:35.830 --rc genhtml_function_coverage=1 00:22:35.830 --rc genhtml_legend=1 00:22:35.830 --rc geninfo_all_blocks=1 00:22:35.830 --rc geninfo_unexecuted_blocks=1 00:22:35.830 00:22:35.830 ' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:35.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.830 --rc genhtml_branch_coverage=1 00:22:35.830 --rc genhtml_function_coverage=1 00:22:35.830 --rc genhtml_legend=1 00:22:35.830 --rc geninfo_all_blocks=1 00:22:35.830 --rc geninfo_unexecuted_blocks=1 00:22:35.830 00:22:35.830 ' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:35.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.830 --rc genhtml_branch_coverage=1 00:22:35.830 --rc genhtml_function_coverage=1 00:22:35.830 --rc genhtml_legend=1 00:22:35.830 --rc geninfo_all_blocks=1 00:22:35.830 --rc geninfo_unexecuted_blocks=1 00:22:35.830 00:22:35.830 ' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:35.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.830 --rc genhtml_branch_coverage=1 00:22:35.830 --rc genhtml_function_coverage=1 00:22:35.830 --rc genhtml_legend=1 00:22:35.830 --rc geninfo_all_blocks=1 00:22:35.830 --rc geninfo_unexecuted_blocks=1 00:22:35.830 00:22:35.830 ' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.830 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.831 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.831 08:58:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:42.403 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:42.403 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:42.403 Found net devices under 0000:da:00.0: mlx_0_0 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:42.403 Found net devices under 0000:da:00.1: mlx_0_1 00:22:42.403 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.403 08:59:04 
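The device scan above reduces to: select PCI functions by vendor:device ID (Mellanox 0x15b3:0x1015 here), then resolve each function to its kernel net device through sysfs, where a NIC's interfaces appear as children of its PCI node. The same lookup, standalone (PCI addresses taken from the "Found 0000:da:00.x" lines above):

    for pci in 0000:da:00.0 0000:da:00.1; do
        # each net interface of a NIC is listed under its PCI sysfs node
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "$pci -> ${dev##*/}"   # e.g. 0000:da:00.0 -> mlx_0_0
        done
    done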
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # rdma_device_init 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.404 
08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:42.404 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:42.404 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:22:42.404 altname enp218s0f0np0 00:22:42.404 altname ens818f0np0 00:22:42.404 inet 192.168.100.8/24 scope global mlx_0_0 00:22:42.404 valid_lft forever preferred_lft forever 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:42.404 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:42.404 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:22:42.404 altname enp218s0f1np1 00:22:42.404 altname ens818f1np1 00:22:42.404 inet 192.168.100.9/24 scope global mlx_0_1 00:22:42.404 valid_lft forever preferred_lft forever 00:22:42.404 08:59:04 
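The get_ip_address calls above boil down to one pipeline per interface (field 4 of `ip -o -4 addr show` is the CIDR address), and the first/second target IPs assembled further below at common.sh@483-484 are simply the first and second lines of the collected list:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1       # -> 192.168.100.8
    ips="$(for i in mlx_0_0 mlx_0_1; do
               ip -o -4 addr show "$i" | awk '{print $4}' | cut -d/ -f1
           done)"
    NVMF_FIRST_TARGET_IP=$(echo "$ips" | head -n 1)                   # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$ips" | tail -n +2 | head -n 1)     # 192.168.100.9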
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.404 
08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:22:42.404 192.168.100.9' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:22:42.404 192.168.100.9' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # head -n 1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:22:42.404 192.168.100.9' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # tail -n +2 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # head -n 1 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:22:42.404 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:42.405 08:59:04 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:42.405 08:59:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:44.312 Waiting for block devices as requested 00:22:44.571 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:22:44.571 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:44.571 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:44.830 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:44.830 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:44.830 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:44.830 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:45.089 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:45.089 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:45.089 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:45.348 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:45.349 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:45.349 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:45.608 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:45.608 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:45.608 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:45.608 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:22:45.867 08:59:08 
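The configure_kernel_target trace that follows builds the kernel nvmet target through configfs: create a subsystem plus one namespace, back the namespace with /dev/nvme0n1 (after the GPT/in-use check), open an RDMA port on 192.168.100.8:4420, and link the subsystem into the port. The xtrace below does not show redirection targets, so the attribute file names in this condensed sketch are the standard kernel nvmet configfs ones, inferred rather than logged:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"      # model string (assumed file)
    echo 1              > "$subsys/attr_allow_any_host"               # assumed file
    echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
    echo 1              > "$subsys/namespaces/1/enable"
    echo 192.168.100.8  > "$nvmet/ports/1/addr_traddr"
    echo rdma           > "$nvmet/ports/1/addr_trtype"
    echo 4420           > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4           > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"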
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:45.867 No valid GPT data, bailing 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 192.168.100.8 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo rdma 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:45.867 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:22:46.127 00:22:46.127 Discovery Log Number of Records 2, Generation counter 2 00:22:46.127 =====Discovery Log Entry 0====== 00:22:46.127 trtype: rdma 00:22:46.127 adrfam: ipv4 00:22:46.127 subtype: current discovery subsystem 00:22:46.127 treq: not specified, sq 
flow control disable supported 00:22:46.127 portid: 1 00:22:46.127 trsvcid: 4420 00:22:46.127 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:46.127 traddr: 192.168.100.8 00:22:46.127 eflags: none 00:22:46.127 rdma_prtype: not specified 00:22:46.127 rdma_qptype: connected 00:22:46.127 rdma_cms: rdma-cm 00:22:46.127 rdma_pkey: 0x0000 00:22:46.127 =====Discovery Log Entry 1====== 00:22:46.127 trtype: rdma 00:22:46.127 adrfam: ipv4 00:22:46.127 subtype: nvme subsystem 00:22:46.127 treq: not specified, sq flow control disable supported 00:22:46.127 portid: 1 00:22:46.127 trsvcid: 4420 00:22:46.127 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:46.127 traddr: 192.168.100.8 00:22:46.127 eflags: none 00:22:46.127 rdma_prtype: not specified 00:22:46.127 rdma_qptype: connected 00:22:46.127 rdma_cms: rdma-cm 00:22:46.127 rdma_pkey: 0x0000 00:22:46.127 08:59:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:22:46.127 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:46.127 ===================================================== 00:22:46.127 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:46.127 ===================================================== 00:22:46.127 Controller Capabilities/Features 00:22:46.127 ================================ 00:22:46.127 Vendor ID: 0000 00:22:46.127 Subsystem Vendor ID: 0000 00:22:46.127 Serial Number: 4ed87487f485a0aa9b8b 00:22:46.127 Model Number: Linux 00:22:46.127 Firmware Version: 6.8.9-20 00:22:46.127 Recommended Arb Burst: 0 00:22:46.127 IEEE OUI Identifier: 00 00 00 00:22:46.127 Multi-path I/O 00:22:46.127 May have multiple subsystem ports: No 00:22:46.127 May have multiple controllers: No 00:22:46.127 Associated with SR-IOV VF: No 00:22:46.127 Max Data Transfer Size: Unlimited 00:22:46.127 Max Number of Namespaces: 0 00:22:46.127 Max Number of I/O Queues: 1024 00:22:46.127 NVMe Specification Version (VS): 1.3 00:22:46.127 NVMe Specification Version (Identify): 1.3 00:22:46.127 Maximum Queue Entries: 128 00:22:46.127 Contiguous Queues Required: No 00:22:46.127 Arbitration Mechanisms Supported 00:22:46.127 Weighted Round Robin: Not Supported 00:22:46.127 Vendor Specific: Not Supported 00:22:46.127 Reset Timeout: 7500 ms 00:22:46.127 Doorbell Stride: 4 bytes 00:22:46.127 NVM Subsystem Reset: Not Supported 00:22:46.127 Command Sets Supported 00:22:46.127 NVM Command Set: Supported 00:22:46.127 Boot Partition: Not Supported 00:22:46.127 Memory Page Size Minimum: 4096 bytes 00:22:46.127 Memory Page Size Maximum: 4096 bytes 00:22:46.127 Persistent Memory Region: Not Supported 00:22:46.127 Optional Asynchronous Events Supported 00:22:46.127 Namespace Attribute Notices: Not Supported 00:22:46.127 Firmware Activation Notices: Not Supported 00:22:46.127 ANA Change Notices: Not Supported 00:22:46.127 PLE Aggregate Log Change Notices: Not Supported 00:22:46.127 LBA Status Info Alert Notices: Not Supported 00:22:46.127 EGE Aggregate Log Change Notices: Not Supported 00:22:46.127 Normal NVM Subsystem Shutdown event: Not Supported 00:22:46.127 Zone Descriptor Change Notices: Not Supported 00:22:46.127 Discovery Log Change Notices: Supported 00:22:46.127 Controller Attributes 00:22:46.127 128-bit Host Identifier: Not Supported 00:22:46.128 Non-Operational Permissive Mode: Not Supported 00:22:46.128 NVM Sets: Not Supported 00:22:46.128 Read Recovery Levels: 
Not Supported 00:22:46.128 Endurance Groups: Not Supported 00:22:46.128 Predictable Latency Mode: Not Supported 00:22:46.128 Traffic Based Keep ALive: Not Supported 00:22:46.128 Namespace Granularity: Not Supported 00:22:46.128 SQ Associations: Not Supported 00:22:46.128 UUID List: Not Supported 00:22:46.128 Multi-Domain Subsystem: Not Supported 00:22:46.128 Fixed Capacity Management: Not Supported 00:22:46.128 Variable Capacity Management: Not Supported 00:22:46.128 Delete Endurance Group: Not Supported 00:22:46.128 Delete NVM Set: Not Supported 00:22:46.128 Extended LBA Formats Supported: Not Supported 00:22:46.128 Flexible Data Placement Supported: Not Supported 00:22:46.128 00:22:46.128 Controller Memory Buffer Support 00:22:46.128 ================================ 00:22:46.128 Supported: No 00:22:46.128 00:22:46.128 Persistent Memory Region Support 00:22:46.128 ================================ 00:22:46.128 Supported: No 00:22:46.128 00:22:46.128 Admin Command Set Attributes 00:22:46.128 ============================ 00:22:46.128 Security Send/Receive: Not Supported 00:22:46.128 Format NVM: Not Supported 00:22:46.128 Firmware Activate/Download: Not Supported 00:22:46.128 Namespace Management: Not Supported 00:22:46.128 Device Self-Test: Not Supported 00:22:46.128 Directives: Not Supported 00:22:46.128 NVMe-MI: Not Supported 00:22:46.128 Virtualization Management: Not Supported 00:22:46.128 Doorbell Buffer Config: Not Supported 00:22:46.128 Get LBA Status Capability: Not Supported 00:22:46.128 Command & Feature Lockdown Capability: Not Supported 00:22:46.128 Abort Command Limit: 1 00:22:46.128 Async Event Request Limit: 1 00:22:46.128 Number of Firmware Slots: N/A 00:22:46.128 Firmware Slot 1 Read-Only: N/A 00:22:46.128 Firmware Activation Without Reset: N/A 00:22:46.128 Multiple Update Detection Support: N/A 00:22:46.128 Firmware Update Granularity: No Information Provided 00:22:46.128 Per-Namespace SMART Log: No 00:22:46.128 Asymmetric Namespace Access Log Page: Not Supported 00:22:46.128 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:46.128 Command Effects Log Page: Not Supported 00:22:46.128 Get Log Page Extended Data: Supported 00:22:46.128 Telemetry Log Pages: Not Supported 00:22:46.128 Persistent Event Log Pages: Not Supported 00:22:46.128 Supported Log Pages Log Page: May Support 00:22:46.128 Commands Supported & Effects Log Page: Not Supported 00:22:46.128 Feature Identifiers & Effects Log Page:May Support 00:22:46.128 NVMe-MI Commands & Effects Log Page: May Support 00:22:46.128 Data Area 4 for Telemetry Log: Not Supported 00:22:46.128 Error Log Page Entries Supported: 1 00:22:46.128 Keep Alive: Not Supported 00:22:46.128 00:22:46.128 NVM Command Set Attributes 00:22:46.128 ========================== 00:22:46.128 Submission Queue Entry Size 00:22:46.128 Max: 1 00:22:46.128 Min: 1 00:22:46.128 Completion Queue Entry Size 00:22:46.128 Max: 1 00:22:46.128 Min: 1 00:22:46.128 Number of Namespaces: 0 00:22:46.128 Compare Command: Not Supported 00:22:46.128 Write Uncorrectable Command: Not Supported 00:22:46.128 Dataset Management Command: Not Supported 00:22:46.128 Write Zeroes Command: Not Supported 00:22:46.128 Set Features Save Field: Not Supported 00:22:46.128 Reservations: Not Supported 00:22:46.128 Timestamp: Not Supported 00:22:46.128 Copy: Not Supported 00:22:46.128 Volatile Write Cache: Not Present 00:22:46.128 Atomic Write Unit (Normal): 1 00:22:46.128 Atomic Write Unit (PFail): 1 00:22:46.128 Atomic Compare & Write Unit: 1 00:22:46.128 Fused Compare & Write: Not 
Supported 00:22:46.128 Scatter-Gather List 00:22:46.128 SGL Command Set: Supported 00:22:46.128 SGL Keyed: Supported 00:22:46.128 SGL Bit Bucket Descriptor: Not Supported 00:22:46.128 SGL Metadata Pointer: Not Supported 00:22:46.128 Oversized SGL: Not Supported 00:22:46.128 SGL Metadata Address: Not Supported 00:22:46.128 SGL Offset: Supported 00:22:46.128 Transport SGL Data Block: Not Supported 00:22:46.128 Replay Protected Memory Block: Not Supported 00:22:46.128 00:22:46.128 Firmware Slot Information 00:22:46.128 ========================= 00:22:46.128 Active slot: 0 00:22:46.128 00:22:46.128 00:22:46.128 Error Log 00:22:46.128 ========= 00:22:46.128 00:22:46.128 Active Namespaces 00:22:46.128 ================= 00:22:46.128 Discovery Log Page 00:22:46.128 ================== 00:22:46.128 Generation Counter: 2 00:22:46.128 Number of Records: 2 00:22:46.128 Record Format: 0 00:22:46.128 00:22:46.128 Discovery Log Entry 0 00:22:46.128 ---------------------- 00:22:46.128 Transport Type: 1 (RDMA) 00:22:46.128 Address Family: 1 (IPv4) 00:22:46.128 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:46.128 Entry Flags: 00:22:46.128 Duplicate Returned Information: 0 00:22:46.128 Explicit Persistent Connection Support for Discovery: 0 00:22:46.128 Transport Requirements: 00:22:46.128 Secure Channel: Not Specified 00:22:46.128 Port ID: 1 (0x0001) 00:22:46.128 Controller ID: 65535 (0xffff) 00:22:46.128 Admin Max SQ Size: 32 00:22:46.128 Transport Service Identifier: 4420 00:22:46.128 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:46.128 Transport Address: 192.168.100.8 00:22:46.128 Transport Specific Address Subtype - RDMA 00:22:46.128 RDMA QP Service Type: 1 (Reliable Connected) 00:22:46.128 RDMA Provider Type: 1 (No provider specified) 00:22:46.128 RDMA CM Service: 1 (RDMA_CM) 00:22:46.128 Discovery Log Entry 1 00:22:46.128 ---------------------- 00:22:46.128 Transport Type: 1 (RDMA) 00:22:46.128 Address Family: 1 (IPv4) 00:22:46.128 Subsystem Type: 2 (NVM Subsystem) 00:22:46.128 Entry Flags: 00:22:46.128 Duplicate Returned Information: 0 00:22:46.128 Explicit Persistent Connection Support for Discovery: 0 00:22:46.128 Transport Requirements: 00:22:46.128 Secure Channel: Not Specified 00:22:46.128 Port ID: 1 (0x0001) 00:22:46.128 Controller ID: 65535 (0xffff) 00:22:46.128 Admin Max SQ Size: 32 00:22:46.128 Transport Service Identifier: 4420 00:22:46.128 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:46.128 Transport Address: 192.168.100.8 00:22:46.128 Transport Specific Address Subtype - RDMA 00:22:46.128 RDMA QP Service Type: 1 (Reliable Connected) 00:22:46.388 RDMA Provider Type: 1 (No provider specified) 00:22:46.388 RDMA CM Service: 1 (RDMA_CM) 00:22:46.388 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:46.388 get_feature(0x01) failed 00:22:46.388 get_feature(0x02) failed 00:22:46.388 get_feature(0x04) failed 00:22:46.388 ===================================================== 00:22:46.388 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:22:46.388 ===================================================== 00:22:46.388 Controller Capabilities/Features 00:22:46.388 ================================ 00:22:46.388 Vendor ID: 0000 00:22:46.388 Subsystem Vendor ID: 0000 00:22:46.388 Serial Number: 
94e2de9c706a5158f08a 00:22:46.388 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:46.388 Firmware Version: 6.8.9-20 00:22:46.388 Recommended Arb Burst: 6 00:22:46.388 IEEE OUI Identifier: 00 00 00 00:22:46.388 Multi-path I/O 00:22:46.389 May have multiple subsystem ports: Yes 00:22:46.389 May have multiple controllers: Yes 00:22:46.389 Associated with SR-IOV VF: No 00:22:46.389 Max Data Transfer Size: 1048576 00:22:46.389 Max Number of Namespaces: 1024 00:22:46.389 Max Number of I/O Queues: 128 00:22:46.389 NVMe Specification Version (VS): 1.3 00:22:46.389 NVMe Specification Version (Identify): 1.3 00:22:46.389 Maximum Queue Entries: 128 00:22:46.389 Contiguous Queues Required: No 00:22:46.389 Arbitration Mechanisms Supported 00:22:46.389 Weighted Round Robin: Not Supported 00:22:46.389 Vendor Specific: Not Supported 00:22:46.389 Reset Timeout: 7500 ms 00:22:46.389 Doorbell Stride: 4 bytes 00:22:46.389 NVM Subsystem Reset: Not Supported 00:22:46.389 Command Sets Supported 00:22:46.389 NVM Command Set: Supported 00:22:46.389 Boot Partition: Not Supported 00:22:46.389 Memory Page Size Minimum: 4096 bytes 00:22:46.389 Memory Page Size Maximum: 4096 bytes 00:22:46.389 Persistent Memory Region: Not Supported 00:22:46.389 Optional Asynchronous Events Supported 00:22:46.389 Namespace Attribute Notices: Supported 00:22:46.389 Firmware Activation Notices: Not Supported 00:22:46.389 ANA Change Notices: Supported 00:22:46.389 PLE Aggregate Log Change Notices: Not Supported 00:22:46.389 LBA Status Info Alert Notices: Not Supported 00:22:46.389 EGE Aggregate Log Change Notices: Not Supported 00:22:46.389 Normal NVM Subsystem Shutdown event: Not Supported 00:22:46.389 Zone Descriptor Change Notices: Not Supported 00:22:46.389 Discovery Log Change Notices: Not Supported 00:22:46.389 Controller Attributes 00:22:46.389 128-bit Host Identifier: Supported 00:22:46.389 Non-Operational Permissive Mode: Not Supported 00:22:46.389 NVM Sets: Not Supported 00:22:46.389 Read Recovery Levels: Not Supported 00:22:46.389 Endurance Groups: Not Supported 00:22:46.389 Predictable Latency Mode: Not Supported 00:22:46.389 Traffic Based Keep ALive: Supported 00:22:46.389 Namespace Granularity: Not Supported 00:22:46.389 SQ Associations: Not Supported 00:22:46.389 UUID List: Not Supported 00:22:46.389 Multi-Domain Subsystem: Not Supported 00:22:46.389 Fixed Capacity Management: Not Supported 00:22:46.389 Variable Capacity Management: Not Supported 00:22:46.389 Delete Endurance Group: Not Supported 00:22:46.389 Delete NVM Set: Not Supported 00:22:46.389 Extended LBA Formats Supported: Not Supported 00:22:46.389 Flexible Data Placement Supported: Not Supported 00:22:46.389 00:22:46.389 Controller Memory Buffer Support 00:22:46.389 ================================ 00:22:46.389 Supported: No 00:22:46.389 00:22:46.389 Persistent Memory Region Support 00:22:46.389 ================================ 00:22:46.389 Supported: No 00:22:46.389 00:22:46.389 Admin Command Set Attributes 00:22:46.389 ============================ 00:22:46.389 Security Send/Receive: Not Supported 00:22:46.389 Format NVM: Not Supported 00:22:46.389 Firmware Activate/Download: Not Supported 00:22:46.389 Namespace Management: Not Supported 00:22:46.389 Device Self-Test: Not Supported 00:22:46.389 Directives: Not Supported 00:22:46.389 NVMe-MI: Not Supported 00:22:46.389 Virtualization Management: Not Supported 00:22:46.389 Doorbell Buffer Config: Not Supported 00:22:46.389 Get LBA Status Capability: Not Supported 00:22:46.389 Command & Feature Lockdown 
Capability: Not Supported 00:22:46.389 Abort Command Limit: 4 00:22:46.389 Async Event Request Limit: 4 00:22:46.389 Number of Firmware Slots: N/A 00:22:46.389 Firmware Slot 1 Read-Only: N/A 00:22:46.389 Firmware Activation Without Reset: N/A 00:22:46.389 Multiple Update Detection Support: N/A 00:22:46.389 Firmware Update Granularity: No Information Provided 00:22:46.389 Per-Namespace SMART Log: Yes 00:22:46.389 Asymmetric Namespace Access Log Page: Supported 00:22:46.389 ANA Transition Time : 10 sec 00:22:46.389 00:22:46.389 Asymmetric Namespace Access Capabilities 00:22:46.389 ANA Optimized State : Supported 00:22:46.389 ANA Non-Optimized State : Supported 00:22:46.389 ANA Inaccessible State : Supported 00:22:46.389 ANA Persistent Loss State : Supported 00:22:46.389 ANA Change State : Supported 00:22:46.389 ANAGRPID is not changed : No 00:22:46.389 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:46.389 00:22:46.389 ANA Group Identifier Maximum : 128 00:22:46.389 Number of ANA Group Identifiers : 128 00:22:46.389 Max Number of Allowed Namespaces : 1024 00:22:46.389 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:46.389 Command Effects Log Page: Supported 00:22:46.389 Get Log Page Extended Data: Supported 00:22:46.389 Telemetry Log Pages: Not Supported 00:22:46.389 Persistent Event Log Pages: Not Supported 00:22:46.389 Supported Log Pages Log Page: May Support 00:22:46.389 Commands Supported & Effects Log Page: Not Supported 00:22:46.389 Feature Identifiers & Effects Log Page:May Support 00:22:46.389 NVMe-MI Commands & Effects Log Page: May Support 00:22:46.389 Data Area 4 for Telemetry Log: Not Supported 00:22:46.389 Error Log Page Entries Supported: 128 00:22:46.389 Keep Alive: Supported 00:22:46.389 Keep Alive Granularity: 1000 ms 00:22:46.389 00:22:46.389 NVM Command Set Attributes 00:22:46.389 ========================== 00:22:46.389 Submission Queue Entry Size 00:22:46.389 Max: 64 00:22:46.389 Min: 64 00:22:46.389 Completion Queue Entry Size 00:22:46.389 Max: 16 00:22:46.389 Min: 16 00:22:46.389 Number of Namespaces: 1024 00:22:46.389 Compare Command: Not Supported 00:22:46.389 Write Uncorrectable Command: Not Supported 00:22:46.389 Dataset Management Command: Supported 00:22:46.389 Write Zeroes Command: Supported 00:22:46.389 Set Features Save Field: Not Supported 00:22:46.389 Reservations: Not Supported 00:22:46.389 Timestamp: Not Supported 00:22:46.389 Copy: Not Supported 00:22:46.389 Volatile Write Cache: Present 00:22:46.389 Atomic Write Unit (Normal): 1 00:22:46.389 Atomic Write Unit (PFail): 1 00:22:46.389 Atomic Compare & Write Unit: 1 00:22:46.389 Fused Compare & Write: Not Supported 00:22:46.389 Scatter-Gather List 00:22:46.389 SGL Command Set: Supported 00:22:46.389 SGL Keyed: Supported 00:22:46.389 SGL Bit Bucket Descriptor: Not Supported 00:22:46.389 SGL Metadata Pointer: Not Supported 00:22:46.389 Oversized SGL: Not Supported 00:22:46.389 SGL Metadata Address: Not Supported 00:22:46.389 SGL Offset: Supported 00:22:46.389 Transport SGL Data Block: Not Supported 00:22:46.389 Replay Protected Memory Block: Not Supported 00:22:46.389 00:22:46.389 Firmware Slot Information 00:22:46.389 ========================= 00:22:46.389 Active slot: 0 00:22:46.389 00:22:46.389 Asymmetric Namespace Access 00:22:46.389 =========================== 00:22:46.389 Change Count : 0 00:22:46.389 Number of ANA Group Descriptors : 1 00:22:46.389 ANA Group Descriptor : 0 00:22:46.389 ANA Group ID : 1 00:22:46.389 Number of NSID Values : 1 00:22:46.389 Change Count : 0 00:22:46.389 ANA State 
: 1 00:22:46.389 Namespace Identifier : 1 00:22:46.389 00:22:46.389 Commands Supported and Effects 00:22:46.389 ============================== 00:22:46.389 Admin Commands 00:22:46.389 -------------- 00:22:46.389 Get Log Page (02h): Supported 00:22:46.389 Identify (06h): Supported 00:22:46.389 Abort (08h): Supported 00:22:46.389 Set Features (09h): Supported 00:22:46.389 Get Features (0Ah): Supported 00:22:46.389 Asynchronous Event Request (0Ch): Supported 00:22:46.389 Keep Alive (18h): Supported 00:22:46.389 I/O Commands 00:22:46.389 ------------ 00:22:46.389 Flush (00h): Supported 00:22:46.389 Write (01h): Supported LBA-Change 00:22:46.389 Read (02h): Supported 00:22:46.389 Write Zeroes (08h): Supported LBA-Change 00:22:46.389 Dataset Management (09h): Supported 00:22:46.389 00:22:46.389 Error Log 00:22:46.389 ========= 00:22:46.389 Entry: 0 00:22:46.389 Error Count: 0x3 00:22:46.389 Submission Queue Id: 0x0 00:22:46.389 Command Id: 0x5 00:22:46.389 Phase Bit: 0 00:22:46.389 Status Code: 0x2 00:22:46.389 Status Code Type: 0x0 00:22:46.389 Do Not Retry: 1 00:22:46.389 Error Location: 0x28 00:22:46.389 LBA: 0x0 00:22:46.389 Namespace: 0x0 00:22:46.389 Vendor Log Page: 0x0 00:22:46.389 ----------- 00:22:46.389 Entry: 1 00:22:46.389 Error Count: 0x2 00:22:46.389 Submission Queue Id: 0x0 00:22:46.389 Command Id: 0x5 00:22:46.389 Phase Bit: 0 00:22:46.389 Status Code: 0x2 00:22:46.389 Status Code Type: 0x0 00:22:46.389 Do Not Retry: 1 00:22:46.389 Error Location: 0x28 00:22:46.389 LBA: 0x0 00:22:46.389 Namespace: 0x0 00:22:46.389 Vendor Log Page: 0x0 00:22:46.389 ----------- 00:22:46.389 Entry: 2 00:22:46.389 Error Count: 0x1 00:22:46.389 Submission Queue Id: 0x0 00:22:46.390 Command Id: 0x0 00:22:46.390 Phase Bit: 0 00:22:46.390 Status Code: 0x2 00:22:46.390 Status Code Type: 0x0 00:22:46.390 Do Not Retry: 1 00:22:46.390 Error Location: 0x28 00:22:46.390 LBA: 0x0 00:22:46.390 Namespace: 0x0 00:22:46.390 Vendor Log Page: 0x0 00:22:46.390 00:22:46.390 Number of Queues 00:22:46.390 ================ 00:22:46.390 Number of I/O Submission Queues: 128 00:22:46.390 Number of I/O Completion Queues: 128 00:22:46.390 00:22:46.390 ZNS Specific Controller Data 00:22:46.390 ============================ 00:22:46.390 Zone Append Size Limit: 0 00:22:46.390 00:22:46.390 00:22:46.390 Active Namespaces 00:22:46.390 ================= 00:22:46.390 get_feature(0x05) failed 00:22:46.390 Namespace ID:1 00:22:46.390 Command Set Identifier: NVM (00h) 00:22:46.390 Deallocate: Supported 00:22:46.390 Deallocated/Unwritten Error: Not Supported 00:22:46.390 Deallocated Read Value: Unknown 00:22:46.390 Deallocate in Write Zeroes: Not Supported 00:22:46.390 Deallocated Guard Field: 0xFFFF 00:22:46.390 Flush: Supported 00:22:46.390 Reservation: Not Supported 00:22:46.390 Namespace Sharing Capabilities: Multiple Controllers 00:22:46.390 Size (in LBAs): 3125627568 (1490GiB) 00:22:46.390 Capacity (in LBAs): 3125627568 (1490GiB) 00:22:46.390 Utilization (in LBAs): 3125627568 (1490GiB) 00:22:46.390 UUID: 9167db3e-baf4-4816-971b-6db86038cb9e 00:22:46.390 Thin Provisioning: Not Supported 00:22:46.390 Per-NS Atomic Units: Yes 00:22:46.390 Atomic Boundary Size (Normal): 0 00:22:46.390 Atomic Boundary Size (PFail): 0 00:22:46.390 Atomic Boundary Offset: 0 00:22:46.390 NGUID/EUI64 Never Reused: No 00:22:46.390 ANA group ID: 1 00:22:46.390 Namespace Write Protected: No 00:22:46.390 Number of LBA Formats: 1 00:22:46.390 Current LBA Format: LBA Format #00 00:22:46.390 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:46.390 00:22:46.390 
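The nvmftestfini/clean_kernel_target teardown that follows unwinds a Linux kernel NVMe-oF target built through nvmet configfs. For orientation, here is a minimal sketch of the inverse setup, using the subsystem NQN, address, and port visible in the identify output above; the configfs attribute names are the standard nvmet ones, and the backing device path is a placeholder, not taken from this run:

# Sketch: assemble the kernel RDMA target that clean_kernel_target removes below.
modprobe nvmet nvmet-rdma
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir -p "$sub/namespaces/1"                              # creates subsystem, then namespace 1
echo 1 > "$sub/attr_allow_any_host"
echo -n /dev/nvme0n1 > "$sub/namespaces/1/device_path"    # placeholder backing device
echo 1 > "$sub/namespaces/1/enable"
port=/sys/kernel/config/nvmet/ports/1
mkdir "$port"
echo rdma          > "$port/addr_trtype"
echo ipv4          > "$port/addr_adrfam"
echo 192.168.100.8 > "$port/addr_traddr"
echo 4420          > "$port/addr_trsvcid"
ln -s "$sub" "$port/subsystems/nqn.2016-06.io.spdk:testnqn"

The rm -f/rmdir sequence traced below is this in reverse: drop the port-to-subsystem link, then the namespace, port, and subsystem directories, and finally modprobe -r the nvmet modules.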
08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:46.390 rmmod nvme_rdma 00:22:46.390 rmmod nvme_fabrics 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet 00:22:46.390 08:59:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:49.682 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:49.682 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:51.062 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:22:51.062 00:22:51.062 real 0m15.366s 00:22:51.062 user 0m4.379s 00:22:51.062 sys 0m8.912s 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.062 ************************************ 00:22:51.062 END TEST nvmf_identify_kernel_target 00:22:51.062 ************************************ 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.062 ************************************ 00:22:51.062 START TEST nvmf_auth_host 00:22:51.062 ************************************ 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:51.062 * Looking for test storage... 
00:22:51.062 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lcov --version 00:22:51.062 08:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.062 --rc genhtml_branch_coverage=1 00:22:51.062 --rc genhtml_function_coverage=1 00:22:51.062 --rc genhtml_legend=1 00:22:51.062 --rc geninfo_all_blocks=1 00:22:51.062 --rc geninfo_unexecuted_blocks=1 00:22:51.062 00:22:51.062 ' 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.062 --rc genhtml_branch_coverage=1 00:22:51.062 --rc genhtml_function_coverage=1 00:22:51.062 --rc genhtml_legend=1 00:22:51.062 --rc geninfo_all_blocks=1 00:22:51.062 --rc geninfo_unexecuted_blocks=1 00:22:51.062 00:22:51.062 ' 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.062 --rc genhtml_branch_coverage=1 00:22:51.062 --rc genhtml_function_coverage=1 00:22:51.062 --rc genhtml_legend=1 00:22:51.062 --rc geninfo_all_blocks=1 00:22:51.062 --rc geninfo_unexecuted_blocks=1 00:22:51.062 00:22:51.062 ' 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.062 --rc genhtml_branch_coverage=1 00:22:51.062 --rc genhtml_function_coverage=1 00:22:51.062 --rc genhtml_legend=1 00:22:51.062 --rc geninfo_all_blocks=1 00:22:51.062 --rc geninfo_unexecuted_blocks=1 00:22:51.062 00:22:51.062 ' 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.062 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.063 08:59:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.063 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.322 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.323 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.323 08:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.005 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:58.006 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:58.006 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:58.006 08:59:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:58.006 Found net devices under 0000:da:00.0: mlx_0_0 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:58.006 Found net devices under 0000:da:00.1: mlx_0_1 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # rdma_device_init 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:58.006 08:59:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:58.006 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:58.006 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:22:58.006 altname enp218s0f0np0 00:22:58.006 altname ens818f0np0 00:22:58.006 inet 192.168.100.8/24 scope global mlx_0_0 00:22:58.006 valid_lft forever preferred_lft forever 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:58.006 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:58.006 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:22:58.006 altname enp218s0f1np1 00:22:58.006 altname ens818f1np1 00:22:58.006 inet 192.168.100.9/24 scope global mlx_0_1 00:22:58.006 valid_lft forever preferred_lft forever 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:58.006 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:22:58.007 192.168.100.9' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:22:58.007 192.168.100.9' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # head -n 1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:22:58.007 192.168.100.9' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # tail -n +2 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # head -n 1 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 
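Before the modprobe nvme-rdma step below, the trace above resolved each RDMA-capable netdev to its IPv4 address and collected the results into RDMA_IP_LIST. Distilled from that trace, the extraction helper is just the following (function and interface names as they appear in the log):

# Sketch of the get_ip_address step traced above: field 4 of `ip -o -4 addr show`
# is addr/prefix; cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
get_ip_address mlx_0_1   # -> 192.168.100.9

The first address becomes NVMF_FIRST_TARGET_IP (head -n 1 of the list) and the second NVMF_SECOND_TARGET_IP (tail -n +2 | head -n 1), exactly as the trace shows.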
00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=526700 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 526700 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 526700 ']' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.007 08:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=24ddbd9d516cf5ebc6db2fd4fb2d53b0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t 
spdk.key-null.XXX 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.BfG 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 24ddbd9d516cf5ebc6db2fd4fb2d53b0 0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 24ddbd9d516cf5ebc6db2fd4fb2d53b0 0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=24ddbd9d516cf5ebc6db2fd4fb2d53b0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.BfG 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.BfG 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.BfG 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e902037e1d9f1349ba87870cfb3f554fbbcec0a9ff79bdc6b1c8e4d97647ca9c 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.QqU 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e902037e1d9f1349ba87870cfb3f554fbbcec0a9ff79bdc6b1c8e4d97647ca9c 3 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e902037e1d9f1349ba87870cfb3f554fbbcec0a9ff79bdc6b1c8e4d97647ca9c 3 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e902037e1d9f1349ba87870cfb3f554fbbcec0a9ff79bdc6b1c8e4d97647ca9c 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.QqU 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.QqU 00:22:58.007 08:59:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.QqU 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bcc5fce31543df8cc152d9c60976dff12dffa4e0ddf9199f 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.NLX 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bcc5fce31543df8cc152d9c60976dff12dffa4e0ddf9199f 0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bcc5fce31543df8cc152d9c60976dff12dffa4e0ddf9199f 0 00:22:58.007 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bcc5fce31543df8cc152d9c60976dff12dffa4e0ddf9199f 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.NLX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.NLX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NLX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4b91f2a26df1687dc223f0cf41bb065c6a2b69652a72a581 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.m85 00:22:58.008 
08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4b91f2a26df1687dc223f0cf41bb065c6a2b69652a72a581 2 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4b91f2a26df1687dc223f0cf41bb065c6a2b69652a72a581 2 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4b91f2a26df1687dc223f0cf41bb065c6a2b69652a72a581 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.m85 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.m85 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.m85 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a0f3c0b2d4101197a80e02c94393f7fd 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Dx3 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a0f3c0b2d4101197a80e02c94393f7fd 1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a0f3c0b2d4101197a80e02c94393f7fd 1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a0f3c0b2d4101197a80e02c94393f7fd 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Dx3 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Dx3 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Dx3 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:58.008 08:59:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bf911317450bbba0a5f2881e0cb43a12 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.txT 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bf911317450bbba0a5f2881e0cb43a12 1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bf911317450bbba0a5f2881e0cb43a12 1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bf911317450bbba0a5f2881e0cb43a12 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.txT 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.txT 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.txT 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=53def90d41fa9c2813dd857e9d81451d371fe778e16f959d 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.P97 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 53def90d41fa9c2813dd857e9d81451d371fe778e16f959d 2 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 
53def90d41fa9c2813dd857e9d81451d371fe778e16f959d 2 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=53def90d41fa9c2813dd857e9d81451d371fe778e16f959d 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.P97 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.P97 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.P97 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=95c7bfa9ebc8acf1dae1f548d64cf801 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.g8u 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 95c7bfa9ebc8acf1dae1f548d64cf801 0 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 95c7bfa9ebc8acf1dae1f548d64cf801 0 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=95c7bfa9ebc8acf1dae1f548d64cf801 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.g8u 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.g8u 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.g8u 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:58.008 
08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:22:58.008 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=aed3905483b03db58b925a08eaee731ef10a8fd3e49620fa00e62b7ae955594d 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.qvH 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key aed3905483b03db58b925a08eaee731ef10a8fd3e49620fa00e62b7ae955594d 3 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 aed3905483b03db58b925a08eaee731ef10a8fd3e49620fa00e62b7ae955594d 3 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=aed3905483b03db58b925a08eaee731ef10a8fd3e49620fa00e62b7ae955594d 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.qvH 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.qvH 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qvH 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 526700 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 526700 ']' 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
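For reference, the ten secrets generated above all follow the same recipe: read len/2 random bytes from /dev/urandom as a hex string via xxd, then wrap that string in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash-id>:<base64(secret || crc32le(secret))>:, where the hash id comes from the digests table traced above (null=0, sha256=1, sha384=2, sha512=3). Below is a minimal bash sketch of those steps; the helper name gen_dhchap_key_sketch and the inline python3 one-liner are illustrative only (the real logic lives in nvmf/common.sh's gen_dhchap_key / format_dhchap_key), but the output format matches the DHHC-1 keys visible in the trace.

# Sketch of "gen_dhchap_key <digest> <len>" as traced above (illustrative only).
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

gen_dhchap_key_sketch() {
    local digest=$1 len=$2 hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # DHHC-1:<hash id>:<base64(hex string || little-endian CRC32)>:
    python3 - "$hex" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
PY
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key_sketch null 32   # yields a file like /tmp/spdk.key-null.BfG above

As a sanity check, base64("24ddbd9d516cf5ebc6db2fd4fb2d53b0" plus its CRC32) reproduces the MjRkZGJk...YjAiVtmE payload of keys[0] in the trace.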
00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BfG 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.QqU ]] 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QqU 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.009 08:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NLX 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.m85 ]] 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.m85 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.009 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Dx3 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.txT ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.txT 00:22:58.268 08:59:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.P97 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.g8u ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.g8u 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qvH 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.268 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:58.269 08:59:21 
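The keyring_file_add_key calls above register the five key/ckey pairs under well-known names (key0..key4, ckey0..ckey3; key4 has no companion ckey). rpc_cmd is autotest_common.sh's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the registrations are equivalent to one-shot invocations of the following form; only the first pair is spelled out here, the rest follow the same pattern with the paths traced above:

# Equivalent standalone form of the keyring registrations traced above.
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.BfG
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QqU
# ...and likewise key1/ckey1 through key3/ckey3, plus key4 on its own.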
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:58.269 08:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:00.802 Waiting for block devices as requested 00:23:00.802 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:23:01.061 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:01.061 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:01.061 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:01.320 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:01.320 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:01.320 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:01.320 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:01.579 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:01.579 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:01.579 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:01.579 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:01.839 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:01.839 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:01.839 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:02.098 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:02.098 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:02.667 No valid GPT data, bailing 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 192.168.100.8 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo rdma 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:02.667 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:23:02.927 00:23:02.927 Discovery Log Number of Records 2, Generation counter 2 00:23:02.927 =====Discovery Log Entry 0====== 00:23:02.927 trtype: rdma 00:23:02.927 adrfam: ipv4 00:23:02.927 subtype: current discovery subsystem 00:23:02.927 treq: not specified, sq flow control disable supported 00:23:02.927 portid: 1 00:23:02.927 trsvcid: 4420 00:23:02.927 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:02.927 traddr: 192.168.100.8 00:23:02.927 eflags: none 00:23:02.927 rdma_prtype: not specified 00:23:02.927 rdma_qptype: connected 00:23:02.927 rdma_cms: rdma-cm 00:23:02.927 rdma_pkey: 0x0000 00:23:02.927 =====Discovery Log Entry 1====== 00:23:02.927 trtype: rdma 00:23:02.927 adrfam: ipv4 00:23:02.927 subtype: nvme subsystem 00:23:02.927 treq: not specified, sq flow control disable supported 00:23:02.927 portid: 1 00:23:02.927 trsvcid: 4420 00:23:02.927 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:02.927 traddr: 192.168.100.8 00:23:02.927 eflags: none 00:23:02.927 rdma_prtype: not specified 00:23:02.927 rdma_qptype: connected 00:23:02.927 rdma_cms: rdma-cm 00:23:02.927 rdma_pkey: 0x0000 00:23:02.927 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:02.927 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:02.927 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
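configure_kernel_target and nvmet_auth_init above build the kernel nvmet responder through configfs. The xtrace output hides redirection targets, so the echo destinations in the following sketch are an assumption based on the standard kernel nvmet configfs layout; the values themselves are the ones traced.

# Assumed configfs writes behind configure_kernel_target / nvmet_auth_init above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_serial"   # assumed target
echo 1 > "$subsys/attr_allow_any_host"                           # assumed target
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma          > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# nvmet_auth_init then whitelists the test host and disables allow_any_host
# (the "echo 0"); nvmet_auth_set_key points the host's dhchap_* attributes at
# the generated secrets, per the kernel's nvmet host-authentication support.
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"          # assumed target of the 'echo 0'
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)'  > "$host/dhchap_hash"
echo ffdhe2048       > "$host/dhchap_dhgroup"
echo "DHHC-1:00:..." > "$host/dhchap_key"       # keys[1] contents, elided here
echo "DHHC-1:02:..." > "$host/dhchap_ctrl_key"  # ckeys[1] contents, elided here

The nvme discover output above confirms the result: the kernel target exposes both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 over RDMA at 192.168.100.8:4420.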
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:02.927 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:02.927 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.928 08:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.187 nvme0n1 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:03.187 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.188 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.447 nvme0n1 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.447 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
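On the initiator side, each connect_authenticate pass reduces to two RPCs: pin the permitted DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, then attach with the keyring names registered earlier; the bdev_nvme_get_controllers / bdev_nvme_detach_controller pair around each pass verifies the session and tears it down. Standalone equivalents of the key1 iteration traced here, with every flag taken verbatim from the trace:

# Host-side half of one connect_authenticate pass (flags as traced above).
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of the trace repeats this loop across every digest, DH group, and key index combination.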
00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.706 nvme0n1 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.706 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.966 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.225 nvme0n1 00:23:04.225 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.225 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.225 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.225 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.225 08:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:04.225 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.226 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.226 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:04.226 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:04.226 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:04.226 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.226 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.485 nvme0n1 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.485 08:59:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # 
local ip 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.485 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.744 nvme0n1 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 
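[editor's note] The entries above complete the sha256/ffdhe2048 pass; everything below repeats the same five-step sequence (set key on target, set host dhchap options, attach over rdma, verify controller name, detach) for ffdhe3072, ffdhe4096 and ffdhe6144. A plain-bash sketch of the loop driving this trace follows. Only the rpc_cmd invocations appear verbatim in the log; the configfs paths, the rpc_cmd wrapper, and the keys/ckeys arrays (whose keyring entries key0..key4 / ckey0..ckey3 were registered earlier in the test) are assumptions reconstructed from the trace, not a verbatim copy of host/auth.sh.

rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }   # assumed wrapper; $rootdir = SPDK checkout

hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

nvmet_auth_set_key() {   # target side, cf. host/auth.sh@42-51 in the trace
    local digest=$1 dhgroup=$2 keyid=$3
    local cfs=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed configfs location
    echo "hmac($digest)" > "$cfs/dhchap_hash"
    echo "$dhgroup" > "$cfs/dhchap_dhgroup"
    echo "${keys[keyid]}" > "$cfs/dhchap_key"
    # only install a controller (bidirectional) key when one is configured
    [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$cfs/dhchap_ctrl_key"
}

connect_authenticate() {   # host side, cf. host/auth.sh@55-65 in the trace
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # authentication succeeded iff the controller actually came up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do   # groups exercised in this log
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done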
00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.744 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.003 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:05.003 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.004 08:59:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.004 08:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.263 nvme0n1 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:05.263 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.264 08:59:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.264 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.523 nvme0n1 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
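[editor's note] The DHHC-1:xx:...: strings scrolling past are the standard NVMe-oF DH-HMAC-CHAP secret representation: the second field says how the base64 payload is used (00 = raw secret, 01/02/03 = a transformed key sized for SHA-256/384/512), and the payload is the secret followed by a 4-byte CRC-32 trailer. The empty ckey on the keyid=4 entries simply means no controller key is configured, so the [[ -z '' ]] guard skips bidirectional authentication for that key. A minimal sanity-checker for such strings, assuming the format just described (the CRC-32 trailer is not verified here):

parse_dhchap_key() {
    local tag hash b64 rest
    IFS=: read -r tag hash b64 rest <<< "$1"
    [[ $tag == DHHC-1 ]] || { echo "not a DHHC-1 secret" >&2; return 1; }
    case $hash in
        00) echo "raw secret (used as-is)" ;;
        01) echo "transformed key, SHA-256 (32 bytes)" ;;
        02) echo "transformed key, SHA-384 (48 bytes)" ;;
        03) echo "transformed key, SHA-512 (64 bytes)" ;;
        *)  echo "unknown hash id $hash" >&2; return 1 ;;
    esac
    # payload = secret || CRC-32; the 02-type keys in this log decode to 52 bytes (48 + 4)
    echo "payload: $(echo "$b64" | base64 -d | wc -c) bytes"
}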
00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:05.523 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.524 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.784 nvme0n1 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.784 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.044 08:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.044 nvme0n1 00:23:06.044 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.044 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.044 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.044 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.044 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.303 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.303 08:59:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.304 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.563 nvme0n1 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:06.563 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.563 
08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.861 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.862 08:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.429 nvme0n1 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.429 
08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.429 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.688 nvme0n1 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.688 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:07.689 
08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.689 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.256 nvme0n1 00:23:08.256 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.256 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.256 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.256 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.256 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.256 08:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.256 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.257 08:59:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.257 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.516 nvme0n1 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.516 
08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.516 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.775 nvme0n1 00:23:08.775 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.775 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.775 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.775 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.775 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.775 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.033 08:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.410 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:10.410 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:10.410 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.411 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.670 nvme0n1 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
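The block above is one complete pass of the test's inner loop: nvmet_auth_set_key programs the target side with a digest, DH group, and DHHC-1 secret, then connect_authenticate attaches a controller with the matching host-side key. A minimal sketch of the driver loop this trace is executing (host/auth.sh@100-104 per the markers above; the exact array contents are assumptions inferred from the values visible in this log):

digests=(sha256 sha384)                                       # sha256 shown here; sha384 appears later in this log
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed list; ffdhe2048/4096/6144/8192 are visible
for digest in "${digests[@]}"; do                             # host/auth.sh@100
	for dhgroup in "${dhgroups[@]}"; do                       # host/auth.sh@101
		for keyid in "${!keys[@]}"; do                        # host/auth.sh@102, keyids 0-4 in this run
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side: echo digest/dhgroup/key
			connect_authenticate "$digest" "$dhgroup" "$keyid" # host side: set options, attach, verify, detach
		done
	done
done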
00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:10.670 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.671 08:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.239 nvme0n1 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
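Each connect_authenticate call resolves the target address through get_main_ns_ip, whose xtrace repeats above (nvmf/common.sh@767-781). A plausible reconstruction from that trace; the TEST_TRANSPORT variable name and the exact guard lines are assumptions, since xtrace only shows the expanded values:

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP            # nvmf/common.sh@770
	ip_candidates["tcp"]=NVMF_INITIATOR_IP                # nvmf/common.sh@771
	[[ -z $TEST_TRANSPORT ]] && return 1                  # trace shows [[ -z rdma ]]
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # trace shows [[ -z NVMF_FIRST_TARGET_IP ]]
	ip=${ip_candidates[$TEST_TRANSPORT]}                  # nvmf/common.sh@774
	ip=${!ip}                                             # indirect expansion -> 192.168.100.8 in this run
	[[ -z $ip ]] && return 1                              # nvmf/common.sh@776
	echo "$ip"                                            # nvmf/common.sh@781
}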
00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.239 08:59:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.239 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.807 nvme0n1 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.808 08:59:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.808 08:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.376 nvme0n1 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:12.376 08:59:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.376 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.635 nvme0n1 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.635 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 
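Every secret in this log uses the DHHC-1 representation, DHHC-1:<t>:<base64 payload>:, where the second field names the transform applied to the secret (00 = none, 01/02/03 = SHA-256/384/512 per the NVMe DH-HMAC-CHAP scheme) and the base64 payload is, in the common nvme-cli encoding, the raw secret followed by its CRC-32 — that payload detail is an assumption, not something the log itself shows. A minimal shape check over one of the keys above:

# Hedged sketch: validate the outer format of a DHHC-1 secret string.
check_dhchap_key() {
	local key=$1
	local re='^DHHC-1:(00|01|02|03):[A-Za-z0-9+/]+={0,2}:$'
	[[ $key =~ $re ]]
}
check_dhchap_key 'DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK:' && echo well-formed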
00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:12.894 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.895 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.895 08:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.463 nvme0n1 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:13.463 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.464 08:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.031 nvme0n1 00:23:14.031 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.031 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.031 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.031 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.031 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.031 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.290 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:14.291 08:59:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.291 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.859 nvme0n1 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.859 
08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.859 08:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.428 nvme0n1 00:23:15.428 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.428 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.428 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.428 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.428 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.687 08:59:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.687 08:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 nvme0n1 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:16.255 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
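Between iterations the trace repeats the same verification and teardown (host/auth.sh@64-65): list controllers over JSON-RPC, confirm the authenticated attach produced nvme0, then detach it so the next digest/dhgroup/keyid combination starts clean. Restated as a sketch (rpc_cmd is the test suite's JSON-RPC wrapper):

ctrl=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')  # host/auth.sh@64
[[ $ctrl == "nvme0" ]]                  # a successful attach means the DH-HMAC-CHAP handshake passed
rpc_cmd bdev_nvme_detach_controller nvme0                     # host/auth.sh@65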
00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.256 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.516 nvme0n1 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.516 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.776 nvme0n1 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.776 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.034 08:59:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:17.034 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.035 08:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.035 nvme0n1 00:23:17.035 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.035 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.035 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:23:17.035 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.035 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.035 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.294 08:59:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.294 nvme0n1 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.294 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.553 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.554 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:17.813 nvme0n1 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.813 
08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.813 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.073 nvme0n1 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.073 08:59:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.333 nvme0n1 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.333 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.593 nvme0n1 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.593 08:59:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.593 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.851 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.851 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.851 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.852 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.111 nvme0n1 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.111 08:59:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:19.111 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.112 08:59:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.370 nvme0n1 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:19.370 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.371 08:59:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.371 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.630 nvme0n1 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.630 08:59:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.630 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:19.889 08:59:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.889 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.148 nvme0n1 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.148 08:59:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:20.148 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:20.149 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.149 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.149 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.408 nvme0n1 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.408 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.667 08:59:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.667 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.926 nvme0n1 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.926 08:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.185 nvme0n1 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.185 08:59:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.185 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.444 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.703 nvme0n1 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
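
Each nvmet_auth_set_key <digest> <dhgroup> <keyid> pass in this trace programs the target half of NVMe DH-HMAC-CHAP: the echo 'hmac(sha384)', echo ffdhe6144, and echo DHHC-1:... entries at host/auth.sh@48-51 are writes of the digest name, DH group, and secrets into the target's configuration. With the kernel nvmet target those land in configfs attributes; a minimal standalone sketch of that step under that assumed layout, with placeholder secrets rather than this run's keys:

    # Target-side half of one (digest, dhgroup, keyid) iteration.
    # Assumptions: kernel nvmet target with its usual configfs layout;
    # <host-secret>/<ctrlr-secret> stand in for real DHHC-1 key material.
    hostnqn=nqn.2024-02.io.spdk:host0            # host NQN used by this trace
    cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

    echo 'hmac(sha384)' > "$cfs/dhchap_hash"       # digest under test
    echo 'ffdhe6144'    > "$cfs/dhchap_dhgroup"    # FFDHE group under test
    echo 'DHHC-1:00:<host-secret>:'  > "$cfs/dhchap_key"       # host key
    echo 'DHHC-1:02:<ctrlr-secret>:' > "$cfs/dhchap_ctrl_key"  # enables bidirectional auth

Secrets of the DHHC-1:<hmac-id>:<base64 key+CRC>: form seen in the surrounding entries can be generated with nvme-cli, e.g. nvme gen-dhchap-key -m 2 -n "$hostnqn" for an HMAC-SHA-384-wrapped key (flag names per recent nvme-cli; treat them as an assumption for older builds).
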
00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.703 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.962 08:59:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.221 nvme0n1 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.221 08:59:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.221 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.480 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.739 nvme0n1 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
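
The host half, connect_authenticate, boils down to four RPCs against the SPDK application: restrict bdev_nvme to the one digest/dhgroup pair under test, attach a controller with the matching key slots, confirm the controller materialized (an authentication failure surfaces as nvme0 never appearing in bdev_nvme_get_controllers), and detach before the next combination. The interleaved get_main_ns_ip entries merely map the transport to an address variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), which resolves to 192.168.100.8 on this rig. A standalone sketch of one pass follows; the rpc.py path and the keyring names key2/ckey2 (registered earlier in the test run, e.g. via keyring_file_add_key) are assumptions, while the transport, address, and NQNs mirror the trace:

    # Host-side half of one iteration, driven through SPDK's JSON-RPC client.
    rpc=./scripts/rpc.py   # assumed location of SPDK's rpc.py

    # Only negotiate the digest/dhgroup combination under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Attach with the host secret (key2) and, for bidirectional
    # authentication, the controller secret (ckey2).
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # DH-HMAC-CHAP succeeded iff the controller actually came up.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0
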
00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.739 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.740 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.000 08:59:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.259 nvme0n1 00:23:23.259 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.259 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.259 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.259 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.259 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.259 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.259 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.260 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.828 nvme0n1 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.828 08:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.395 nvme0n1 00:23:24.395 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.395 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.395 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.395 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.395 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.395 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.655 08:59:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.224 nvme0n1 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.224 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.792 nvme0n1 00:23:25.792 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.792 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.792 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.792 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.792 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:26.052 
08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.052 08:59:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.620 nvme0n1 00:23:26.620 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.620 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.620 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.620 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.621 08:59:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.556 nvme0n1 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:27.556 08:59:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.556 nvme0n1 00:23:27.556 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.557 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.557 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.557 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.557 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.557 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.816 08:59:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.816 nvme0n1 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.816 08:59:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.816 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
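The trace above and below is one pass of a three-level loop. From the host/auth.sh line numbers visible in the entries (@100-@104), the driver is, in outline, the following — a sketch reconstructed from the trace, not a verbatim copy of host/auth.sh:

# Reconstructed outline of the host/auth.sh driver (@100-@104) whose
# xtrace repeats throughout this section: every digest is paired with
# every DH group and every configured key, and each combination is
# exercised end to end.
for digest in "${digests[@]}"; do             # sha384, sha512 in this run
    for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do        # 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
        done
    done
done

Iterating the full cross product keeps the target (nvmet_auth_set_key) and the host (connect_authenticate) in lockstep, so a failure isolates a single digest/DH-group/key combination.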
00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.076 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.077 08:59:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.337 nvme0n1 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.337 08:59:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.337 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.596 nvme0n1 00:23:28.596 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.596 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.596 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.596 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.597 08:59:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.597 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.856 nvme0n1 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
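Each connect_authenticate pass re-derives the target IP through the nvmf/common.sh helper whose trace repeats above (@767-@781). A sketch of that helper, reconstructed from the visible trace — the transport variable name (TEST_TRANSPORT here) and the ${!ip} indirect expansion are assumptions, since the trace only shows the candidate table, the emptiness checks, and the final echo of 192.168.100.8:

# Sketch of the get_main_ns_ip helper from nvmf/common.sh (@767-@781),
# reconstructed from its xtrace. Maps the transport in use to the name
# of the environment variable that holds the main namespace IP, then
# dereferences it.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                    # "rdma" in this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # NVMF_FIRST_TARGET_IP
    [[ -z ${!ip} ]] && return 1                             # 192.168.100.8 here
    echo "${!ip}"
}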
00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.857 08:59:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.117 nvme0n1 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.117 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.377 nvme0n1 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.377 08:59:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.377 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.636 08:59:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:29.636 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.637 nvme0n1 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.637 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 
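
Between set_options and the attach, get_main_ns_ip (nvmf/common.sh@767-781) resolves which address to dial: the transport name is mapped to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and that variable is then expanded, yielding 192.168.100.8 in this job. The ip=NVMF_FIRST_TARGET_IP / echo 192.168.100.8 pair in the trace is consistent with bash indirect expansion, roughly:

    # Sketch of get_main_ns_ip(); the ${!ip} indirection and the
    # TEST_TRANSPORT variable name are assumptions inferred from the trace,
    # which only shows the values "rdma" and "NVMF_FIRST_TARGET_IP".
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # NVMF_FIRST_TARGET_IP for rdma
        [[ -z ${!ip} ]] && return 1             # the variable must be populated
        echo "${!ip}"                           # 192.168.100.8 in this run
    }
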
00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:29.896 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.897 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.897 08:59:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.156 nvme0n1 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.157 08:59:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
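
keyid=4 is the unidirectional case: ckeys[4] is empty, so the [[ -z '' ]] at auth.sh@51 skips the controller-key write on the target, and the ${ckeys[keyid]:+...} expansion at @58 produces no --dhchap-ctrlr-key argument for the attach that follows. The :+ form expands to its alternate words only when the parameter is set and non-empty, which is what turns one empty string into zero extra flags:

    # Demonstration of the ${var:+...} idiom from host/auth.sh@58
    # (key values abbreviated to hypothetical placeholders).
    ckeys=([0]="DHHC-1:..." [1]="DHHC-1:..." [4]="")

    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"    # 0 -> keyid 4 adds no flags at all

    keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey1
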
common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.157 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.416 nvme0n1 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
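
The bare nvme0n1 tokens interleaved in the trace are RPC stdout rather than script commands: bdev_nvme_attach_controller prints the name of the namespace bdev it creates once authentication succeeds (nvme0n1 here), and bdev_nvme_get_controllers returns the controller list that the script then checks by name. The verify-and-teardown tail of each pass (auth.sh@64-65) therefore reduces to:

    # Verify-and-teardown step as traced at host/auth.sh@64-65.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    # xtrace prints this comparison as [[ nvme0 == \n\v\m\e\0 ]]: the
    # right-hand side is quoted in the source, so it is matched literally
    # instead of being treated as a glob pattern.
    [[ $name == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
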
key ckey 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.416 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.417 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
192.168.100.8 ]] 00:23:30.417 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:30.417 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.417 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.417 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.675 nvme0n1 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.675 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.676 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
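
The secrets themselves use the NVMe DH-HMAC-CHAP in-band authentication key format (TP-8006, as produced by nvme-cli's gen-dhchap-key): DHHC-1:<t>:<base64>:, where t encodes the secret transform (00 = raw, 01/02/03 = 32/48/64-byte secrets tied to SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. That description comes from the key-format spec rather than from this log, but it matches the payload lengths seen here; a quick check:

    # Hedged sanity check of DHHC-1 framing: the decoded payload should be
    # the secret (32/48/64 bytes per the field after "DHHC-1:") plus 4 CRC bytes.
    check_dhchap_key() {
        local key=$1 hmac b64 len
        hmac=${key#DHHC-1:}; hmac=${hmac%%:*}   # 00, 01, 02 or 03
        b64=${key#DHHC-1:*:}; b64=${b64%:}      # base64(secret + CRC-32)
        len=$(printf '%s' "$b64" | base64 -d | wc -c)
        echo "hmac id $hmac: $((len - 4)) secret bytes + 4 CRC bytes"
    }
    check_dhchap_key "DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK:"
    # -> hmac id 01: 32 secret bytes + 4 CRC bytes
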
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.935 08:59:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.194 nvme0n1 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.194 08:59:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:31.194 08:59:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:31.194 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.195 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.195 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.453 nvme0n1 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.453 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.713 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.972 nvme0n1 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.972 
08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:31.972 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.973 08:59:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.232 nvme0n1 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.232 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.491 08:59:55 
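
The for dhgroup / for keyid entries at auth.sh@101-102 expose the matrix driving this whole excerpt: the digest is pinned to sha512 here, the outer loop has just advanced from ffdhe4096 to ffdhe6144, and the inner loop replays keyids 0 through 4 against the new group. A sketch of that driver (array contents assumed from what this trace exercises; other digests and groups run in parts of the log not shown):

    # Sketch of the test matrix around host/auth.sh@101-104 for this excerpt.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # as exercised in this section
    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
        for keyid in "${!keys[@]}"; do         # auth.sh@102: keyids 0..4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # auth.sh@103
            connect_authenticate sha512 "$dhgroup" "$keyid"  # auth.sh@104
        done
    done
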
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.491 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.751 nvme0n1 00:23:32.751 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.751 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.751 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.751 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.751 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.751 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.010 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.011 08:59:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.011 08:59:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.270 nvme0n1 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.270 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
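
Every rpc_cmd in this log is bracketed by common/autotest_common.sh@561 xtrace_disable and, once it returns, a [[ 0 == 0 ]] at @589: the harness mutes set -x while the RPC's JSON flows and restores it afterwards, the @589 test being one of its internal state checks (the unexpanded source expression is not visible in this trace). The save/disable/restore idiom itself, reduced to a self-contained hypothetical form:

    # Hypothetical simplification of the xtrace_disable/xtrace_restore pair;
    # the real autotest_common.sh versions also handle nesting.
    xtrace_disable() { PREV_BASH_OPTS=$-; set +x; }
    xtrace_restore() { [[ $PREV_BASH_OPTS == *x* ]] && set -x; }

    set -x
    xtrace_disable
    echo "not traced"    # e.g. bulky rpc.py output stays out of the log
    xtrace_restore       # tracing resumes only if it was on before
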
00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 
00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.529 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.788 nvme0n1 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.788 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.048 08:59:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.307 nvme0n1 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.307 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.566 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.874 nvme0n1 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 
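For keyid 4 the controller key is empty, so the trace's `[[ -z '' ]]` test succeeds and the attach at host/auth.sh@61 carries only --dhchap-key key4: the host authenticates to the target, but nothing authenticates back. The ckey=(${ckeys[keyid]:+...}) line at host/auth.sh@58 makes that work through bash's ":+" alternate-value expansion, which drops the whole option pair when the key is unset or empty. A self-contained illustration of the idiom (the key material and the echo are illustrative only):

```bash
#!/usr/bin/env bash
# The ${var:+words} idiom from host/auth.sh@58: emit the
# --dhchap-ctrlr-key option words only when a controller key exists.
ckeys=([2]="DHHC-1:01:example" [4]="")   # keyid 4: no controller key

for keyid in 2 4; do
    # Empty/unset value => :+ yields nothing => ckey=() and the
    # attach command is built without the controller-key flags.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
done
# keyid=2 -> extra args: --dhchap-ctrlr-key ckey2
# keyid=4 -> extra args: <none>
```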
00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.874 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjRkZGJkOWQ1MTZjZjVlYmM2ZGIyZmQ0ZmIyZDUzYjAiVtmE: 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: ]] 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTkwMjAzN2UxZDlmMTM0OWJhODc4NzBjZmIzZjU1NGZiYmNlYzBhOWZmNzliZGM2YjFjOGU0ZDk3NjQ3Y2E5Y86qZgQ=: 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.875 08:59:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.514 nvme0n1 
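At this point the outer loop at host/auth.sh@101 advances from ffdhe6144 to ffdhe8192 and replays keyids 0 through 4, so every (digest, dhgroup, keyid) combination gets one handshake. Reconstructed from the @101/@102/@103 markers, the driving structure looks like the sketch below; the helper bodies are stubbed, and only ffdhe6144/ffdhe8192 are listed because those are the only groups visible in this part of the trace:

```bash
#!/usr/bin/env bash
# Loop shape implied by host/auth.sh@101 ("for dhgroup ...") and
# @102 ("for keyid in ${!keys[@]}"). Helpers are stubs here.
nvmet_auth_set_key()   { echo "target: $1/$2 keyid=$3"; }
connect_authenticate() { echo "host:   $1/$2 keyid=$3"; }

keys=(key0 key1 key2 key3 key4)
digest=sha512
for dhgroup in ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"  # @103
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
    done
done
```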
00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.514 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.773 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.773 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.773 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:35.773 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.773 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.773 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.774 08:59:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.774 08:59:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.342 nvme0n1 00:23:36.342 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.342 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.342 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.342 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 
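The get_main_ns_ip block that recurs before every attach (nvmf/common.sh@767-781) resolves the address to dial: ip_candidates maps each transport to the name of an environment variable (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP), and the function echoes that variable's value, 192.168.100.8 in this run. A condensed sketch, assuming the transport is carried in a variable such as TEST_TRANSPORT (the trace shows only the already-expanded value "rdma") and using ${!ip} indirection for the name-to-value step:

```bash
#!/usr/bin/env bash
# Condensed get_main_ns_ip, per nvmf/common.sh@767-781 in the trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1   # @773: no transport configured
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @774: pick the variable *name*
    [[ -z ${!ip} ]] && return 1            # @776: variable unset or empty
    echo "${!ip}"                          # @781: its value
}

TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=192.168.100.8
get_main_ns_ip   # prints 192.168.100.8
```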
00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.343 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.911 nvme0n1 00:23:36.911 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.911 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.911 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.911 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.911 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.911 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTNkZWY5MGQ0MWZhOWMyODEzZGQ4NTdlOWQ4MTQ1MWQzNzFmZTc3OGUxNmY5NTlkvwIiUw==: 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTVjN2JmYTllYmM4YWNmMWRhZTFmNTQ4ZDY0Y2Y4MDFD+7yf: 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:37.170 08:59:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:37.170 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:37.171 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.171 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.171 08:59:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.739 nvme0n1 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
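Each successful handshake ends with the same verify-and-teardown step (host/auth.sh@64-65): list the controllers over JSON-RPC, pull the name out with jq, assert it is nvme0 (the backslash-escaped right-hand side in `[[ nvme0 == \n\v\m\e\0 ]]` just forces a literal, non-glob comparison), and detach so the next combination starts from a clean slate. A sketch of that step, again with rpc.py standing in for the suite's rpc_cmd wrapper:

```bash
#!/usr/bin/env bash
set -e
# Verify-and-teardown mirroring host/auth.sh@64-65.
name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')

# The escaped pattern in the trace == a literal string comparison here.
[[ $name == "nvme0" ]]

# Tear the session down before the next (digest, dhgroup, keyid) pass.
rpc.py bdev_nvme_detach_controller nvme0
```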
00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVkMzkwNTQ4M2IwM2RiNThiOTI1YTA4ZWFlZTczMWVmMTBhOGZkM2U0OTYyMGZhMDBlNjJiN2FlOTU1NTk0ZPT7f5o=: 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.739 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.740 09:00:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.307 nvme0n1 00:23:38.307 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.307 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.307 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.307 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.307 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:38.567 
09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.567 request: 00:23:38.567 { 00:23:38.567 "name": "nvme0", 00:23:38.567 "trtype": "rdma", 00:23:38.567 "traddr": "192.168.100.8", 00:23:38.567 "adrfam": "ipv4", 00:23:38.567 "trsvcid": "4420", 00:23:38.567 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:23:38.567 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.567 "prchk_reftag": false, 00:23:38.567 "prchk_guard": false, 00:23:38.567 "hdgst": false, 00:23:38.567 "ddgst": false, 00:23:38.567 "allow_unrecognized_csi": false, 00:23:38.567 "method": "bdev_nvme_attach_controller", 00:23:38.567 "req_id": 1 00:23:38.567 } 00:23:38.567 Got JSON-RPC error response 00:23:38.567 response: 00:23:38.567 { 00:23:38.567 "code": -5, 00:23:38.567 "message": "Input/output error" 00:23:38.567 } 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.567 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.826 request: 00:23:38.826 { 00:23:38.826 "name": "nvme0", 00:23:38.826 "trtype": "rdma", 00:23:38.826 "traddr": "192.168.100.8", 00:23:38.826 "adrfam": "ipv4", 00:23:38.826 "trsvcid": "4420", 00:23:38.826 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.826 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.826 "prchk_reftag": false, 00:23:38.826 "prchk_guard": false, 00:23:38.826 "hdgst": false, 00:23:38.826 "ddgst": false, 00:23:38.826 "dhchap_key": "key2", 00:23:38.826 "allow_unrecognized_csi": false, 00:23:38.826 "method": "bdev_nvme_attach_controller", 00:23:38.826 "req_id": 1 00:23:38.826 } 00:23:38.826 Got JSON-RPC error response 00:23:38.826 response: 00:23:38.826 { 00:23:38.826 "code": -5, 00:23:38.826 "message": "Input/output error" 00:23:38.826 } 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma 
]] 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.826 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.826 request: 00:23:38.826 { 00:23:38.826 "name": "nvme0", 00:23:38.826 "trtype": "rdma", 00:23:38.826 "traddr": "192.168.100.8", 00:23:38.826 "adrfam": "ipv4", 00:23:38.826 "trsvcid": "4420", 00:23:38.826 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.826 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.826 "prchk_reftag": false, 00:23:38.826 "prchk_guard": false, 00:23:38.826 "hdgst": false, 00:23:38.826 "ddgst": false, 00:23:38.826 "dhchap_key": "key1", 00:23:38.826 "dhchap_ctrlr_key": "ckey2", 00:23:38.826 "allow_unrecognized_csi": false, 00:23:38.826 "method": "bdev_nvme_attach_controller", 00:23:38.826 "req_id": 1 00:23:38.826 } 00:23:38.826 Got JSON-RPC error response 00:23:38.827 response: 00:23:38.827 { 00:23:38.827 "code": -5, 00:23:38.827 "message": "Input/output error" 00:23:38.827 } 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:23:38.827 09:00:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.827 09:00:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.085 nvme0n1 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.085 
09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.085 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.345 request: 00:23:39.345 { 00:23:39.345 "name": "nvme0", 00:23:39.345 "dhchap_key": "key1", 00:23:39.345 "dhchap_ctrlr_key": "ckey2", 00:23:39.345 "method": "bdev_nvme_set_keys", 00:23:39.345 "req_id": 1 00:23:39.345 } 00:23:39.345 Got JSON-RPC error response 00:23:39.345 response: 00:23:39.345 { 00:23:39.345 "code": -13, 00:23:39.345 "message": "Permission denied" 00:23:39.345 } 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:39.345 09:00:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:40.281 09:00:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.281 09:00:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:40.281 09:00:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.281 09:00:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.281 09:00:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.281 09:00:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:40.281 09:00:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.661 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNjNWZjZTMxNTQzZGY4Y2MxNTJkOWM2MDk3NmRmZjEyZGZmYTRlMGRkZjkxOTlm+LQ0kw==: 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGI5MWYyYTI2ZGYxNjg3ZGMyMjNmMGNmNDFiYjA2NWM2YTJiNjk2NTJhNzJhNTgxsIBTfA==: 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.662 nvme0n1 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmM2MwYjJkNDEwMTE5N2E4MGUwMmM5NDM5M2Y3ZmS4XcoK: 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY5MTEzMTc0NTBiYmJhMGE1ZjI4ODFlMGNiNDNhMTJN8xSD: 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.662 request: 00:23:41.662 { 00:23:41.662 "name": "nvme0", 00:23:41.662 "dhchap_key": "key2", 00:23:41.662 "dhchap_ctrlr_key": "ckey1", 00:23:41.662 "method": "bdev_nvme_set_keys", 00:23:41.662 "req_id": 1 00:23:41.662 } 00:23:41.662 Got JSON-RPC error response 00:23:41.662 response: 00:23:41.662 { 00:23:41.662 "code": -13, 00:23:41.662 "message": "Permission denied" 00:23:41.662 } 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:41.662 09:00:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:43.037 09:00:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.037 09:00:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:43.037 09:00:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.037 09:00:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.037 09:00:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.037 09:00:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:43.037 09:00:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:43.975 
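The two bdev_nvme_set_keys failures above are deliberate: host/auth.sh offers a key pair the target was never configured with, and the NOT wrapper inverts the exit status so the JSON-RPC -13 (Permission denied) response counts as a pass. A minimal sketch of that pattern, assuming scripts/rpc.py is on PATH and a controller named nvme0 is already attached (names mirror the trace, not a fixed contract):

  # Negative test: key rotation with mismatched DH-HMAC-CHAP keys must be rejected.
  if scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1; then
      echo "unexpected success: mismatched keys were accepted" >&2
      exit 1
  fi

Because the controller was attached with --ctrlr-loss-timeout-sec 1, it drops out shortly after the target-side key rotation, which is why the script keeps polling bdev_nvme_get_controllers | jq length once per second until the count reaches 0 before moving on to cleanup.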
09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:43.975 rmmod nvme_rdma 00:23:43.975 rmmod nvme_fabrics 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 526700 ']' 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 526700 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 526700 ']' 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 526700 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 526700 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 526700' 00:23:43.975 killing process with pid 526700 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 526700 00:23:43.975 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 526700 00:23:44.235 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:44.235 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:23:44.235 09:00:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:44.235 09:00:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet 00:23:44.235 09:00:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:47.528 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:47.528 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:48.466 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:23:48.725 09:00:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.BfG /tmp/spdk.key-null.NLX /tmp/spdk.key-sha256.Dx3 /tmp/spdk.key-sha384.P97 /tmp/spdk.key-sha512.qvH /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:48.725 09:00:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:51.260 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:51.260 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:80:04.2 (8086 2021): Already using the 
vfio-pci driver 00:23:51.260 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:51.260 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:51.519 00:23:51.519 real 1m0.503s 00:23:51.519 user 0m56.244s 00:23:51.519 sys 0m12.817s 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.519 ************************************ 00:23:51.519 END TEST nvmf_auth_host 00:23:51.519 ************************************ 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.519 ************************************ 00:23:51.519 START TEST nvmf_bdevperf 00:23:51.519 ************************************ 00:23:51.519 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:51.779 * Looking for test storage... 
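run_test wraps each suite in the START/END banners seen here; the bdevperf suite that just started can also be launched by hand against a checkout. A sketch, assuming an SPDK tree with the RDMA NICs already wired up as in this job (the path is illustrative):

  # Hypothetical standalone run of the suite exercised below.
  cd /path/to/spdk
  sudo test/nvmf/host/bdevperf.sh --transport=rdma

The --transport=rdma argument is what later selects '-t rdma' for nvmf_create_transport and drives the mlx5 device scan further down.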
00:23:51.779 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.779 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:51.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.780 --rc genhtml_branch_coverage=1 00:23:51.780 --rc genhtml_function_coverage=1 00:23:51.780 --rc genhtml_legend=1 00:23:51.780 --rc geninfo_all_blocks=1 00:23:51.780 --rc geninfo_unexecuted_blocks=1 00:23:51.780 00:23:51.780 ' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:51.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.780 --rc genhtml_branch_coverage=1 00:23:51.780 --rc genhtml_function_coverage=1 00:23:51.780 --rc genhtml_legend=1 00:23:51.780 --rc geninfo_all_blocks=1 00:23:51.780 --rc geninfo_unexecuted_blocks=1 00:23:51.780 00:23:51.780 ' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:51.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.780 --rc genhtml_branch_coverage=1 00:23:51.780 --rc genhtml_function_coverage=1 00:23:51.780 --rc genhtml_legend=1 00:23:51.780 --rc geninfo_all_blocks=1 00:23:51.780 --rc geninfo_unexecuted_blocks=1 00:23:51.780 00:23:51.780 ' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:51.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.780 --rc genhtml_branch_coverage=1 00:23:51.780 --rc genhtml_function_coverage=1 00:23:51.780 --rc genhtml_legend=1 00:23:51.780 --rc geninfo_all_blocks=1 00:23:51.780 --rc geninfo_unexecuted_blocks=1 00:23:51.780 00:23:51.780 ' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.780 09:00:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.780 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:51.780 09:00:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.780 09:00:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:23:58.364 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.365 09:00:20 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:58.365 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:58.365 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:58.365 Found net devices under 0000:da:00.0: mlx_0_0 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:58.365 Found net devices under 0000:da:00.1: mlx_0_1 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # rdma_device_init 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:58.365 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.365 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:23:58.365 altname enp218s0f0np0 00:23:58.365 altname ens818f0np0 00:23:58.365 inet 192.168.100.8/24 scope global mlx_0_0 00:23:58.365 valid_lft forever preferred_lft forever 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:58.365 09:00:20 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:58.365 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:58.365 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.365 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:23:58.365 altname enp218s0f1np1 00:23:58.365 altname ens818f1np1 00:23:58.365 inet 192.168.100.9/24 scope global mlx_0_1 00:23:58.365 valid_lft forever preferred_lft forever 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:23:58.366 09:00:20 
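get_ip_address is a single pipeline per port: take the first IPv4 address that `ip -o -4 addr show` reports and strip the prefix length. A sketch using the mlx_0_0 listing above:

  # What get_ip_address does for one interface (expected output: 192.168.100.8).
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1

The two results are then joined into RDMA_IP_LIST, and the head/tail calls just below split it back into NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9).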
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:23:58.366 192.168.100.9' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:23:58.366 192.168.100.9' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # head -n 1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:23:58.366 192.168.100.9' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # tail -n +2 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # head -n 1 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=542107 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 542107 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 542107 ']' 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.366 [2024-11-06 09:00:20.574231] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:23:58.366 [2024-11-06 09:00:20.574285] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.366 [2024-11-06 09:00:20.652886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.366 [2024-11-06 09:00:20.693795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.366 [2024-11-06 09:00:20.693832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.366 [2024-11-06 09:00:20.693839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.366 [2024-11-06 09:00:20.693844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.366 [2024-11-06 09:00:20.693849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
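nvmfappstart hands the target -m 0xE, a core mask of binary 1110, so the three reactors that start next land on cores 1-3 while core 0 stays free for the single-core bdevperf initiator launched afterwards. A sketch of the equivalent manual launch, assuming the same build tree, with waitforlisten approximated by rpc.py's own timeout:

  # Hypothetical manual equivalent of 'nvmfappstart -m 0xE'.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Block until the RPC socket answers, roughly what waitforlisten does.
  scripts/rpc.py -t 30 rpc_get_methods > /dev/null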
00:23:58.366 [2024-11-06 09:00:20.695184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.366 [2024-11-06 09:00:20.695278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.366 [2024-11-06 09:00:20.695279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.366 [2024-11-06 09:00:20.865046] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ba0530/0x1ba4a20) succeed. 00:23:58.366 [2024-11-06 09:00:20.873909] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ba1b20/0x1be60c0) succeed. 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.366 Malloc0 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.366 09:00:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:23:58.366 [2024-11-06 09:00:21.020587] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:58.366 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:58.366 { 00:23:58.366 "params": { 00:23:58.366 "name": "Nvme$subsystem", 00:23:58.366 "trtype": "$TEST_TRANSPORT", 00:23:58.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.367 "adrfam": "ipv4", 00:23:58.367 "trsvcid": "$NVMF_PORT", 00:23:58.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.367 "hdgst": ${hdgst:-false}, 00:23:58.367 "ddgst": ${ddgst:-false} 00:23:58.367 }, 00:23:58.367 "method": "bdev_nvme_attach_controller" 00:23:58.367 } 00:23:58.367 EOF 00:23:58.367 )") 00:23:58.367 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:23:58.367 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:23:58.367 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:23:58.367 09:00:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:58.367 "params": { 00:23:58.367 "name": "Nvme1", 00:23:58.367 "trtype": "rdma", 00:23:58.367 "traddr": "192.168.100.8", 00:23:58.367 "adrfam": "ipv4", 00:23:58.367 "trsvcid": "4420", 00:23:58.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.367 "hdgst": false, 00:23:58.367 "ddgst": false 00:23:58.367 }, 00:23:58.367 "method": "bdev_nvme_attach_controller" 00:23:58.367 }' 00:23:58.367 [2024-11-06 09:00:21.070316] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:23:58.367 [2024-11-06 09:00:21.070359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542134 ] 00:23:58.367 [2024-11-06 09:00:21.146987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.367 [2024-11-06 09:00:21.188178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.367 Running I/O for 1 seconds... 
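To reproduce this step outside Jenkins: the rpc_cmd lines above map one-to-one onto scripts/rpc.py calls, and bdevperf consumes the printed attach parameters as a JSON config on /dev/fd/62. A sketch under those assumptions follows; the rpc.py arguments and the "params" object are copied from the trace, while the outer "subsystems"/"bdev" wrapper is an assumed shape for gen_nvmf_target_json's full output, which this log never shows verbatim.

RPC="$SPDK_DIR/scripts/rpc.py"
# Provision the target exactly as the rpc_cmd trace above does.
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# Attach as a host and run the 1-second verify pass. Wrapper layout assumed.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
"$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1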
00:23:59.792 17920.00 IOPS, 70.00 MiB/s
00:23:59.792 Latency(us)
00:23:59.792 [2024-11-06T08:00:22.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:59.792 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:59.792 Verification LBA range: start 0x0 length 0x4000
00:23:59.792 Nvme1n1 : 1.01 17930.29 70.04 0.00 0.00 7101.29 2574.63 11047.50
00:23:59.792 [2024-11-06T08:00:22.806Z] ===================================================================================================================
00:23:59.792 [2024-11-06T08:00:22.806Z] Total : 17930.29 70.04 0.00 0.00 7101.29 2574.63 11047.50
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=542371
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:23:59.792 {
00:23:59.792 "params": {
00:23:59.792 "name": "Nvme$subsystem",
00:23:59.792 "trtype": "$TEST_TRANSPORT",
00:23:59.792 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:59.792 "adrfam": "ipv4",
00:23:59.792 "trsvcid": "$NVMF_PORT",
00:23:59.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:59.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:59.792 "hdgst": ${hdgst:-false},
00:23:59.792 "ddgst": ${ddgst:-false}
00:23:59.792 },
00:23:59.792 "method": "bdev_nvme_attach_controller"
00:23:59.792 }
00:23:59.792 EOF
00:23:59.792 )")
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:23:59.792 09:00:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:23:59.792 "params": {
00:23:59.792 "name": "Nvme1",
00:23:59.792 "trtype": "rdma",
00:23:59.792 "traddr": "192.168.100.8",
00:23:59.792 "adrfam": "ipv4",
00:23:59.792 "trsvcid": "4420",
00:23:59.792 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.792 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:59.792 "hdgst": false,
00:23:59.792 "ddgst": false
00:23:59.792 },
00:23:59.792 "method": "bdev_nvme_attach_controller"
00:23:59.792 }'
00:23:59.792 [2024-11-06 09:00:22.612413] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:23:59.792 [2024-11-06 09:00:22.612463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542371 ] 00:23:59.792 [2024-11-06 09:00:22.686535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.792 [2024-11-06 09:00:22.723987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.050 Running I/O for 15 seconds... 00:24:01.943 17920.00 IOPS, 70.00 MiB/s [2024-11-06T08:00:25.894Z] 18014.50 IOPS, 70.37 MiB/s [2024-11-06T08:00:25.894Z] 09:00:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 542107 00:24:02.880 09:00:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:03.709 16000.00 IOPS, 62.50 MiB/s [2024-11-06T08:00:26.723Z] [2024-11-06 09:00:26.607026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.709 [2024-11-06 09:00:26.607060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:24:03.709 [2024-11-06 09:00:26.607070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.709 [2024-11-06 09:00:26.607077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:24:03.709 [2024-11-06 09:00:26.607084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.709 [2024-11-06 09:00:26.607090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:24:03.709 [2024-11-06 09:00:26.607098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.709 [2024-11-06 09:00:26.607104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:24:03.709 [2024-11-06 09:00:26.609029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:03.709 [2024-11-06 09:00:26.609069] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
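The abrupt failure that follows is deliberate: three seconds into the 15-second run the harness kill -9's the target (pid 542107, per the bdevperf.sh@33 trace above), so every queued command completes as aborted and the host falls into its reconnect loop. A sketch of that pattern with illustrative variable names; the -f flag is copied from the invocation above, and reading it as "keep running despite I/O failures" is an inference from this log rather than a quoted usage string.

"$SPDK_DIR/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w verify -t 15 -f &   # -f: survive the induced I/O errors
bdevperfpid=$!
sleep 3
kill -9 "$nvmfpid"   # hard-kill the target mid-run; queued I/O -> ABORTED - SQ DELETION
sleep 3              # host retries; RDMA_CM_EVENT_REJECTED while nothing listens on 4420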
00:24:03.709 [2024-11-06 09:00:26.609117 - 09:00:26.613702] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~127 queued commands completed as ABORTED - SQ DELETION (00/08) cdw0:2e528000 sqhd:7210 p:0 m:0 dnr:0 -- WRITE sqid:1 lba:122128-122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, and READ sqid:1 lba:121856-122112 len:8 SGL KEYED DATA BLOCK len:0x1000 key:0x182800 [per-command NOTICE pairs and individual cids elided]
00:24:03.712 [2024-11-06 09:00:26.627531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:03.712 [2024-11-06 09:00:26.627575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:03.712 [2024-11-06 09:00:26.627599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122120 len:8 PRP1 0x0 PRP2 0x0
00:24:03.712 [2024-11-06 09:00:26.627623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.712 [2024-11-06 09:00:26.627778] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress.
00:24:03.712 [2024-11-06 09:00:26.627839] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress.
00:24:03.712 [2024-11-06 09:00:26.630942] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:03.712 [2024-11-06 09:00:26.633816] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:03.712 [2024-11-06 09:00:26.633834] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:03.712 [2024-11-06 09:00:26.633841] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed000
00:24:04.907 12000.00 IOPS, 46.88 MiB/s [2024-11-06T08:00:27.921Z]
[2024-11-06 09:00:27.638696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:24:04.907 [2024-11-06 09:00:27.638722] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:04.907 [2024-11-06 09:00:27.638901] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:04.907 [2024-11-06 09:00:27.638911] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:04.907 [2024-11-06 09:00:27.638920] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:24:04.907 [2024-11-06 09:00:27.641617] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:04.907 [2024-11-06 09:00:27.646112] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:04.907 [2024-11-06 09:00:27.648704] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:04.907 [2024-11-06 09:00:27.648723] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:04.907 [2024-11-06 09:00:27.648730] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed000 00:24:05.734 9600.00 IOPS, 37.50 MiB/s [2024-11-06T08:00:28.748Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 542107 Killed "${NVMF_APP[@]}" "$@" 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=543295 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 543295 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 543295 ']' 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.734 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:05.734 [2024-11-06 09:00:28.632136] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:24:05.734 [2024-11-06 09:00:28.632181] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.734 [2024-11-06 09:00:28.652658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:05.734 [2024-11-06 09:00:28.652682] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
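[editor's note] The "line 35: 542107 Killed" message above is bdevperf.sh deliberately SIGKILL-ing the first nvmf_tgt instance mid-I/O and then re-running tgt_init, which is what produces the reconnect and failover errors on either side of it. A rough sketch of that kill-and-restart fault injection, with NVMF_TGT and NVMF_PID as stand-ins for the harness's own variables:

    #!/usr/bin/env bash
    # Sketch: crash the target mid-I/O, then bring a fresh one up.
    # NVMF_TGT is a stand-in for the full build/bin/nvmf_tgt path.
    NVMF_TGT=./build/bin/nvmf_tgt
    "$NVMF_TGT" -m 0xE &
    NVMF_PID=$!
    sleep 5                       # let bdevperf I/O run for a while
    kill -9 "$NVMF_PID"           # simulate a target crash: no cleanup, no goodbye
    "$NVMF_TGT" -m 0xE &          # restart; the host side is expected to reconnect
    NVMF_PID=$!
    wait "$NVMF_PID"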
00:24:05.734 [2024-11-06 09:00:28.652860] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:05.734 [2024-11-06 09:00:28.652873] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:05.734 [2024-11-06 09:00:28.652885] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:24:05.734 [2024-11-06 09:00:28.655660] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:05.734 [2024-11-06 09:00:28.660910] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:05.734 [2024-11-06 09:00:28.663652] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:05.734 [2024-11-06 09:00:28.663673] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:05.734 [2024-11-06 09:00:28.663684] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed000 00:24:05.734 [2024-11-06 09:00:28.711338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:05.993 [2024-11-06 09:00:28.753144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.993 [2024-11-06 09:00:28.753179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.993 [2024-11-06 09:00:28.753187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.993 [2024-11-06 09:00:28.753193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.993 [2024-11-06 09:00:28.753197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.993 [2024-11-06 09:00:28.754528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.993 [2024-11-06 09:00:28.754634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.993 [2024-11-06 09:00:28.754636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.993 09:00:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:05.993 [2024-11-06 09:00:28.909196] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x57e530/0x582a20) succeed. 
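[editor's note] The trap registered above ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT/SIGTERM/EXIT) is what guarantees the shared-memory dump and target teardown run even if the test dies early. The same pattern in isolation, with cleanup() standing in for process_shm/nvmftestfini:

    #!/usr/bin/env bash
    # Sketch of the trap-based cleanup used by the harness above.
    cleanup() {
        echo "collecting shm dump and tearing down target" >&2
    }
    # '|| :' keeps a failed collector from masking the test's own exit code.
    trap 'cleanup || :' SIGINT SIGTERM EXIT
    echo "test body runs here"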
00:24:05.993 [2024-11-06 09:00:28.918095] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x57fb20/0x5c40c0) succeed. 00:24:06.252 8000.00 IOPS, 31.25 MiB/s [2024-11-06T08:00:29.266Z] 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:06.252 Malloc0 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:06.252 [2024-11-06 09:00:29.066361] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.252 09:00:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 542371 00:24:06.823 [2024-11-06 09:00:29.667643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:06.823 [2024-11-06 09:00:29.667668] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:06.823 [2024-11-06 09:00:29.667847] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:06.824 [2024-11-06 09:00:29.667858] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:06.824 [2024-11-06 09:00:29.667866] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:24:06.824 [2024-11-06 09:00:29.667885] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 
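[editor's note] The rpc_cmd calls across the last two stanzas amount to the standard five-step RDMA target bring-up. Collected in one place, using scripts/rpc.py directly (which is what rpc_cmd wraps; the relative path assumes you run from the SPDK tree):

    #!/usr/bin/env bash
    # The same bring-up the harness performs above, as plain RPCs.
    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420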
00:24:06.824 [2024-11-06 09:00:29.670651] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:06.824 [2024-11-06 09:00:29.680934] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:06.824 [2024-11-06 09:00:29.722305] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:24:08.019 7380.86 IOPS, 28.83 MiB/s [2024-11-06T08:00:31.970Z] 8696.00 IOPS, 33.97 MiB/s [2024-11-06T08:00:33.348Z] 9724.44 IOPS, 37.99 MiB/s [2024-11-06T08:00:34.283Z] 10545.60 IOPS, 41.19 MiB/s [2024-11-06T08:00:35.217Z] 11217.18 IOPS, 43.82 MiB/s [2024-11-06T08:00:36.152Z] 11778.42 IOPS, 46.01 MiB/s [2024-11-06T08:00:37.088Z] 12251.00 IOPS, 47.86 MiB/s [2024-11-06T08:00:38.023Z] 12656.86 IOPS, 49.44 MiB/s [2024-11-06T08:00:38.023Z] 13009.20 IOPS, 50.82 MiB/s 00:24:15.009 Latency(us) 00:24:15.009 [2024-11-06T08:00:38.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.009 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:15.009 Verification LBA range: start 0x0 length 0x4000 00:24:15.009 Nvme1n1 : 15.01 13010.70 50.82 10236.16 0.00 5484.99 341.33 1054567.86 00:24:15.009 [2024-11-06T08:00:38.023Z] =================================================================================================================== 00:24:15.009 [2024-11-06T08:00:38.023Z] Total : 13010.70 50.82 10236.16 0.00 5484.99 341.33 1054567.86 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:15.267 rmmod nvme_rdma 00:24:15.267 rmmod nvme_fabrics 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 543295 ']' 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # 
killprocess 543295 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 543295 ']' 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 543295 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 543295 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 543295' 00:24:15.267 killing process with pid 543295 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 543295 00:24:15.267 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 543295 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:24:15.526 00:24:15.526 real 0m24.028s 00:24:15.526 user 1m2.262s 00:24:15.526 sys 0m5.417s 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:15.526 ************************************ 00:24:15.526 END TEST nvmf_bdevperf 00:24:15.526 ************************************ 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.526 09:00:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.785 ************************************ 00:24:15.785 START TEST nvmf_target_disconnect 00:24:15.785 ************************************ 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:15.785 * Looking for test storage... 
00:24:15.785 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:15.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.785 --rc genhtml_branch_coverage=1 00:24:15.785 --rc genhtml_function_coverage=1 00:24:15.785 --rc genhtml_legend=1 00:24:15.785 --rc geninfo_all_blocks=1 00:24:15.785 --rc geninfo_unexecuted_blocks=1 00:24:15.785 00:24:15.785 ' 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:15.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.785 --rc genhtml_branch_coverage=1 00:24:15.785 --rc genhtml_function_coverage=1 00:24:15.785 --rc genhtml_legend=1 00:24:15.785 --rc geninfo_all_blocks=1 00:24:15.785 --rc geninfo_unexecuted_blocks=1 00:24:15.785 00:24:15.785 ' 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:15.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.785 --rc genhtml_branch_coverage=1 00:24:15.785 --rc genhtml_function_coverage=1 00:24:15.785 --rc genhtml_legend=1 00:24:15.785 --rc geninfo_all_blocks=1 00:24:15.785 --rc geninfo_unexecuted_blocks=1 00:24:15.785 00:24:15.785 ' 00:24:15.785 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:15.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.785 --rc genhtml_branch_coverage=1 00:24:15.785 --rc genhtml_function_coverage=1 00:24:15.785 --rc genhtml_legend=1 00:24:15.785 --rc geninfo_all_blocks=1 00:24:15.785 --rc geninfo_unexecuted_blocks=1 00:24:15.785 00:24:15.785 ' 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.786 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.786 09:00:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:22.354 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.354 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:22.354 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:22.354 09:00:44 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:22.355 Found net devices under 0000:da:00.0: mlx_0_0 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:22.355 Found net devices under 0000:da:00.1: mlx_0_1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 
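[editor's note] The device discovery above ("Found net devices under 0000:da:00.x") boils down to globbing the net/ directory under each candidate PCI function in sysfs. As a standalone sketch, using the first PCI address found in this run:

    #!/usr/bin/env bash
    # Sketch of the sysfs lookup used above: map a PCI function to its
    # net device name(s). If nothing matches, the glob stays literal.
    pci=0000:da:00.0
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    # Strip the directory prefix, keeping just the interface names.
    pci_net_devs=( "${pci_net_devs[@]##*/}" )
    echo "net devices under $pci: ${pci_net_devs[*]}"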
00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:22.355 09:00:44 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:22.355 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:22.355 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:24:22.355 altname enp218s0f0np0 00:24:22.355 altname ens818f0np0 00:24:22.355 inet 192.168.100.8/24 scope global mlx_0_0 00:24:22.355 valid_lft forever preferred_lft forever 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:22.355 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:22.355 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:24:22.355 altname enp218s0f1np1 00:24:22.355 altname ens818f1np1 00:24:22.355 inet 192.168.100.9/24 scope global mlx_0_1 00:24:22.355 valid_lft forever preferred_lft forever 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
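[editor's note] The address allocation above is the get_ip_address pipeline from nvmf/common.sh: take the fourth field of `ip -o -4 addr show` and drop the prefix length. As a self-contained sketch:

    #!/usr/bin/env bash
    # Sketch of get_ip_address as used above: pull the IPv4 address
    # (without the /prefix) off a named interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this setup
    get_ip_address mlx_0_1   # prints 192.168.100.9 on this setup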
00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:22.355 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:24:22.356 192.168.100.9' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:24:22.356 192.168.100.9' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:24:22.356 192.168.100.9' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # head -n 1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:22.356 ************************************ 00:24:22.356 START TEST nvmf_target_disconnect_tc1 00:24:22.356 ************************************ 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:24:22.356 09:00:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:22.356 [2024-11-06 09:00:44.726196] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:22.356 [2024-11-06 09:00:44.726349] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:22.356 [2024-11-06 09:00:44.726373] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7000 00:24:22.923 [2024-11-06 09:00:45.730268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:24:22.923 [2024-11-06 09:00:45.730291] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
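[editor's note] tc1 drives the reconnect example through the NOT/valid_exec_arg wrappers seen above and just below: the test passes only because the connect attempt fails. A simplified sketch of that NOT helper (the real one in autotest_common.sh also special-cases exit codes above 128 and optional output matching):

    #!/usr/bin/env bash
    # Sketch: run a command that is expected to fail; succeed only if it does.
    NOT() {
        local es=0
        "$@" || es=$?
        # Invert: return 0 only when the wrapped command failed.
        (( es != 0 ))
    }
    NOT false && echo "false failed, as expected"
    NOT true  || echo "true succeeded, so NOT reports failure"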
00:24:22.923 [2024-11-06 09:00:45.730300] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:24:22.923 [2024-11-06 09:00:45.730322] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:22.923 [2024-11-06 09:00:45.730330] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:24:22.923 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:24:22.923 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:22.923 Initializing NVMe Controllers 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:22.923 00:24:22.923 real 0m1.142s 00:24:22.923 user 0m0.923s 00:24:22.923 sys 0m0.208s 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:22.923 ************************************ 00:24:22.923 END TEST nvmf_target_disconnect_tc1 00:24:22.923 ************************************ 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:22.923 ************************************ 00:24:22.923 START TEST nvmf_target_disconnect_tc2 00:24:22.923 ************************************ 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=548263 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 548263 00:24:22.923 09:00:45 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 548263 ']' 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.923 09:00:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:22.923 [2024-11-06 09:00:45.866463] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:24:22.923 [2024-11-06 09:00:45.866508] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.183 [2024-11-06 09:00:45.945705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.183 [2024-11-06 09:00:45.987056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.183 [2024-11-06 09:00:45.987094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.183 [2024-11-06 09:00:45.987102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.183 [2024-11-06 09:00:45.987108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.183 [2024-11-06 09:00:45.987114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
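For reference, the target under test is launched straight from the build tree; a sketch of the equivalent manual invocation, with the flags taken from the nvmf/common.sh trace above:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # -m 0xF0   reactor core mask, i.e. cores 4-7; this is why exactly four
  #           "Reactor started on core N" notices follow
  # -e 0xFFFF enable all tracepoint groups, so 'spdk_trace -s nvmf -i 0'
  #           can capture a snapshot, as the notices above describe
  # -i 0      shared-memory instance id (also names /dev/shm/nvmf_trace.0)
  # waitforlisten then polls until the app serves RPCs on /var/tmp/spdk.sock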
00:24:23.183 [2024-11-06 09:00:45.988726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:23.183 [2024-11-06 09:00:45.988834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:23.183 [2024-11-06 09:00:45.988940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:23.183 [2024-11-06 09:00:45.988942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.183 Malloc0 00:24:23.183 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.184 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:23.184 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.184 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.184 [2024-11-06 09:00:46.176501] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2436be0/0x2442b40) succeed. 00:24:23.184 [2024-11-06 09:00:46.185683] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2438270/0x24841e0) succeed. 
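rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the two calls above amount to:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  # creating the RDMA transport is what probes the NICs: the two
  # "Create IB device mlx5_*" notices above are emitted by this call

The malloc bdev is the backing store for the namespace that the next RPCs attach to the subsystem.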
00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.443 [2024-11-06 09:00:46.327991] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=548480 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:23.443 09:00:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:25.348 09:00:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
548263 00:24:25.348 09:00:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Write completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.724 starting I/O failed 00:24:26.724 Read completed with error (sct=0, sc=8) 00:24:26.725 starting I/O failed 00:24:26.725 Write completed with error (sct=0, sc=8) 00:24:26.725 starting I/O failed 00:24:26.725 Read completed with error (sct=0, sc=8) 00:24:26.725 starting I/O failed 00:24:26.725 [2024-11-06 09:00:49.528958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:27.661 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 548263 Killed "${NVMF_APP[@]}" "$@" 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 
0xF0 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=549131 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 549131 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 549131 ']' 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.661 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.661 [2024-11-06 09:00:50.405379] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:24:27.661 [2024-11-06 09:00:50.405426] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.661 [2024-11-06 09:00:50.468587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.661 [2024-11-06 09:00:50.510844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.661 [2024-11-06 09:00:50.510877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.661 [2024-11-06 09:00:50.510884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.662 [2024-11-06 09:00:50.510890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.662 [2024-11-06 09:00:50.510896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
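What tc2 is doing around this point, reconstructed from the host/target_disconnect.sh line numbers visible in the trace (a sketch of the flow, not the verbatim script):

  disconnect_init 192.168.100.8        # sh@37: first target, pid 548263
  reconnect -q 32 ... -r "$trid" &     # sh@40: start I/O against it
  reconnectpid=$!                      # sh@42: 548480
  sleep 2                              # sh@44
  kill -9 $nvmfpid                     # sh@45: hard-kill the target mid-I/O
  sleep 2                              # sh@47
  disconnect_init 192.168.100.8        # sh@48: fresh target, pid 549131 (above)
  wait $reconnectpid                   # sh@50: reconnect must ride out the bounce

The recurring walls of 'Read/Write completed with error (sct=0, sc=8)' lines are the example's outstanding I/Os (queue depth 32) being failed back when a qpair dies; sc=8 under the generic status type is Command Aborted due to SQ Deletion. The 'CQ transport error -6 (No such device or address)' closing each wall is ENXIO surfacing on the RDMA completion queue once the peer is gone.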
00:24:27.662 [2024-11-06 09:00:50.512459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:27.662 [2024-11-06 09:00:50.512569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:27.662 [2024-11-06 09:00:50.512570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:27.662 [2024-11-06 09:00:50.512477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Write completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 Read completed with error (sct=0, sc=8) 00:24:27.662 starting I/O failed 00:24:27.662 [2024-11-06 09:00:50.534130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.662 
09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.662 Malloc0 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.662 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.982 [2024-11-06 09:00:50.699160] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1417be0/0x1423b40) succeed. 00:24:27.982 [2024-11-06 09:00:50.708431] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1419270/0x14651e0) succeed. 
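The subsystem is then rebuilt on the fresh target with the same sequence as the first pass; the four rpc_cmd calls that follow are equivalent to:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001
  # -a: allow any host NQN to connect, -s: serial number
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  # for add_listener, -a/-s are the address and service id (port), not
  # the allow-any-host/serial flags they were for create_subsystem

Only once the listener is added does the 'NVMe/RDMA Target Listening' notice appear and the host have something to reconnect to.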
00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.982 [2024-11-06 09:00:50.847873] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.982 09:00:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 548480 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 
starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Write completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 Read completed with error (sct=0, sc=8) 00:24:28.594 starting I/O failed 00:24:28.594 [2024-11-06 09:00:51.539210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.594 [2024-11-06 09:00:51.547797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.594 [2024-11-06 09:00:51.547861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.594 [2024-11-06 09:00:51.547881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.594 [2024-11-06 09:00:51.547889] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.594 [2024-11-06 09:00:51.547896] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.594 [2024-11-06 09:00:51.557835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.594 qpair failed and we were unable to recover it. 
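A reading of the block that repeats from here on (one instance per reconnect attempt, roughly every 20 ms), inferred from the messages themselves:

  # target side: ctrlr.c:762 "Unknown controller ID 0x1"
  #   the I/O-queue CONNECT still names cntlid 0x1 from the association with
  #   the killed target (548263); the fresh target (549131) never created it
  # host side:   sct 1, sc 130 (0x82)
  #   Command Specific / Connect Invalid Parameters per the NVMe-oF CONNECT
  #   status codes, so the qpair is torn down ("unable to recover it") and
  #   the connect is retried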
00:24:28.594 [2024-11-06 09:00:51.567782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.594 [2024-11-06 09:00:51.567826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.594 [2024-11-06 09:00:51.567847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.594 [2024-11-06 09:00:51.567855] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.594 [2024-11-06 09:00:51.567861] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.594 [2024-11-06 09:00:51.577977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.594 qpair failed and we were unable to recover it. 00:24:28.594 [2024-11-06 09:00:51.587727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.594 [2024-11-06 09:00:51.587771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.594 [2024-11-06 09:00:51.587790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.594 [2024-11-06 09:00:51.587797] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.594 [2024-11-06 09:00:51.587804] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.598110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.607791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.607834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.607851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.607858] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.607865] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.618079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 
00:24:28.884 [2024-11-06 09:00:51.627977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.628024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.628040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.628048] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.628053] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.638079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.647911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.647958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.647974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.647981] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.647991] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.658277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.667966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.668004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.668019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.668026] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.668033] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.678113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 
00:24:28.884 [2024-11-06 09:00:51.688069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.688114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.688129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.688137] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.688143] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.698486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.708116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.708164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.708180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.708187] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.708194] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.718363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.728171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.728216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.728233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.728240] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.728247] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.738459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 
00:24:28.884 [2024-11-06 09:00:51.748159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.748207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.748224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.748231] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.748238] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.758430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.768314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.768355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.768371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.768379] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.768385] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.778558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.788424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.788464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.788480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.788487] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.788494] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.798647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 
00:24:28.884 [2024-11-06 09:00:51.808302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.808340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.808357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.808364] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.808370] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.884 [2024-11-06 09:00:51.818769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.884 qpair failed and we were unable to recover it. 00:24:28.884 [2024-11-06 09:00:51.828403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.884 [2024-11-06 09:00:51.828447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.884 [2024-11-06 09:00:51.828463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.884 [2024-11-06 09:00:51.828470] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.884 [2024-11-06 09:00:51.828476] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.885 [2024-11-06 09:00:51.838632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.885 qpair failed and we were unable to recover it. 00:24:28.885 [2024-11-06 09:00:51.848411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.885 [2024-11-06 09:00:51.848453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.885 [2024-11-06 09:00:51.848469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.885 [2024-11-06 09:00:51.848477] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.885 [2024-11-06 09:00:51.848483] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.885 [2024-11-06 09:00:51.858782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.885 qpair failed and we were unable to recover it. 
00:24:28.885 [2024-11-06 09:00:51.868638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.885 [2024-11-06 09:00:51.868680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.885 [2024-11-06 09:00:51.868695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.885 [2024-11-06 09:00:51.868702] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.885 [2024-11-06 09:00:51.868709] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:28.885 [2024-11-06 09:00:51.878851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:28.885 qpair failed and we were unable to recover it. 00:24:28.885 [2024-11-06 09:00:51.888674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.885 [2024-11-06 09:00:51.888715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.885 [2024-11-06 09:00:51.888731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.885 [2024-11-06 09:00:51.888739] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.885 [2024-11-06 09:00:51.888745] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.144 [2024-11-06 09:00:51.898984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.144 qpair failed and we were unable to recover it. 00:24:29.144 [2024-11-06 09:00:51.908694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.144 [2024-11-06 09:00:51.908736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.144 [2024-11-06 09:00:51.908755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.144 [2024-11-06 09:00:51.908762] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.144 [2024-11-06 09:00:51.908769] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.144 [2024-11-06 09:00:51.918899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.144 qpair failed and we were unable to recover it. 
00:24:29.144 [2024-11-06 09:00:51.928762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.144 [2024-11-06 09:00:51.928801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.144 [2024-11-06 09:00:51.928817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.144 [2024-11-06 09:00:51.928825] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.144 [2024-11-06 09:00:51.928831] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.144 [2024-11-06 09:00:51.939057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.144 qpair failed and we were unable to recover it. 00:24:29.144 [2024-11-06 09:00:51.948885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.144 [2024-11-06 09:00:51.948927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.144 [2024-11-06 09:00:51.948943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.144 [2024-11-06 09:00:51.948950] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.144 [2024-11-06 09:00:51.948957] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.144 [2024-11-06 09:00:51.959100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.144 qpair failed and we were unable to recover it. 00:24:29.144 [2024-11-06 09:00:51.968852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.144 [2024-11-06 09:00:51.968896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.144 [2024-11-06 09:00:51.968911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.144 [2024-11-06 09:00:51.968919] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.144 [2024-11-06 09:00:51.968926] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.144 [2024-11-06 09:00:51.979164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.144 qpair failed and we were unable to recover it. 
00:24:29.144 [2024-11-06 09:00:51.988920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.144 [2024-11-06 09:00:51.988958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.144 [2024-11-06 09:00:51.988974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.144 [2024-11-06 09:00:51.988981] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.144 [2024-11-06 09:00:51.988991] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.144 [2024-11-06 09:00:51.999224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.144 qpair failed and we were unable to recover it. 00:24:29.144 [2024-11-06 09:00:52.008942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.144 [2024-11-06 09:00:52.008983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.144 [2024-11-06 09:00:52.008999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.144 [2024-11-06 09:00:52.009006] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.144 [2024-11-06 09:00:52.009012] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.145 [2024-11-06 09:00:52.019295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.145 qpair failed and we were unable to recover it. 00:24:29.145 [2024-11-06 09:00:52.028987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.145 [2024-11-06 09:00:52.029033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.145 [2024-11-06 09:00:52.029048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.145 [2024-11-06 09:00:52.029055] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.145 [2024-11-06 09:00:52.029062] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.145 [2024-11-06 09:00:52.039208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.145 qpair failed and we were unable to recover it. 
00:24:29.145 [2024-11-06 09:00:52.049101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.145 [2024-11-06 09:00:52.049138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.145 [2024-11-06 09:00:52.049154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.145 [2024-11-06 09:00:52.049161] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.145 [2024-11-06 09:00:52.049168] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.145 [2024-11-06 09:00:52.059445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.145 qpair failed and we were unable to recover it. 00:24:29.145 [2024-11-06 09:00:52.069153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.145 [2024-11-06 09:00:52.069194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.145 [2024-11-06 09:00:52.069219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.145 [2024-11-06 09:00:52.069227] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.145 [2024-11-06 09:00:52.069233] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.145 [2024-11-06 09:00:52.079456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.145 qpair failed and we were unable to recover it. 00:24:29.145 [2024-11-06 09:00:52.089240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.145 [2024-11-06 09:00:52.089279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.145 [2024-11-06 09:00:52.089295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.145 [2024-11-06 09:00:52.089302] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.145 [2024-11-06 09:00:52.089308] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.145 [2024-11-06 09:00:52.099517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.145 qpair failed and we were unable to recover it. 
00:24:29.145 [2024-11-06 09:00:52.109245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.145 [2024-11-06 09:00:52.109284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.145 [2024-11-06 09:00:52.109299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.145 [2024-11-06 09:00:52.109307] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.145 [2024-11-06 09:00:52.109313] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.145 [2024-11-06 09:00:52.119697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.145 qpair failed and we were unable to recover it. 00:24:29.145 [2024-11-06 09:00:52.129433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.145 [2024-11-06 09:00:52.129476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.145 [2024-11-06 09:00:52.129492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.145 [2024-11-06 09:00:52.129499] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.145 [2024-11-06 09:00:52.129505] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.145 [2024-11-06 09:00:52.139877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.145 qpair failed and we were unable to recover it. 00:24:29.145 [2024-11-06 09:00:52.149412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.145 [2024-11-06 09:00:52.149453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.145 [2024-11-06 09:00:52.149469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.145 [2024-11-06 09:00:52.149476] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.145 [2024-11-06 09:00:52.149482] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.404 [2024-11-06 09:00:52.159761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.404 qpair failed and we were unable to recover it. 
00:24:29.404 [2024-11-06 09:00:52.169442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.404 [2024-11-06 09:00:52.169485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.404 [2024-11-06 09:00:52.169501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.404 [2024-11-06 09:00:52.169508] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.404 [2024-11-06 09:00:52.169515] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.404 [2024-11-06 09:00:52.179870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.404 qpair failed and we were unable to recover it. 00:24:29.404 [2024-11-06 09:00:52.189655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.404 [2024-11-06 09:00:52.189698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.404 [2024-11-06 09:00:52.189713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.404 [2024-11-06 09:00:52.189721] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.404 [2024-11-06 09:00:52.189727] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.404 [2024-11-06 09:00:52.199946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.404 qpair failed and we were unable to recover it. 00:24:29.404 [2024-11-06 09:00:52.209696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.404 [2024-11-06 09:00:52.209741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.404 [2024-11-06 09:00:52.209756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.404 [2024-11-06 09:00:52.209764] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.404 [2024-11-06 09:00:52.209770] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.404 [2024-11-06 09:00:52.220009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.404 qpair failed and we were unable to recover it. 
00:24:29.404 [2024-11-06 09:00:52.229674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.404 [2024-11-06 09:00:52.229718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.404 [2024-11-06 09:00:52.229733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.404 [2024-11-06 09:00:52.229741] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.404 [2024-11-06 09:00:52.229747] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.404 [2024-11-06 09:00:52.240039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.404 qpair failed and we were unable to recover it. 00:24:29.404 [2024-11-06 09:00:52.249633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.404 [2024-11-06 09:00:52.249674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.249690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.249701] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.249708] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.260010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-11-06 09:00:52.269724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.269773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.269789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.269797] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.269804] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.280174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-11-06 09:00:52.289830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.289872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.289888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.289895] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.289901] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.300176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-11-06 09:00:52.309850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.309890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.309906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.309913] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.309920] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.320252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-11-06 09:00:52.330164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.330211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.330227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.330234] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.330241] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.340377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-11-06 09:00:52.350176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.350222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.350238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.350245] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.350252] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.360543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-11-06 09:00:52.370262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.370307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.370322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.370330] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.370336] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.380481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-11-06 09:00:52.390094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.390137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.390153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.390160] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.390166] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.405 [2024-11-06 09:00:52.400574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-11-06 09:00:52.410260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.405 [2024-11-06 09:00:52.410305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.405 [2024-11-06 09:00:52.410321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.405 [2024-11-06 09:00:52.410328] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.405 [2024-11-06 09:00:52.410334] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.664 [2024-11-06 09:00:52.420638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.664 qpair failed and we were unable to recover it. 00:24:29.664 [2024-11-06 09:00:52.430373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.664 [2024-11-06 09:00:52.430416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.664 [2024-11-06 09:00:52.430432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.664 [2024-11-06 09:00:52.430439] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.664 [2024-11-06 09:00:52.430446] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.664 [2024-11-06 09:00:52.440589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.664 qpair failed and we were unable to recover it. 00:24:29.664 [2024-11-06 09:00:52.450423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.664 [2024-11-06 09:00:52.450462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.664 [2024-11-06 09:00:52.450477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.664 [2024-11-06 09:00:52.450485] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.664 [2024-11-06 09:00:52.450491] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.460671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 
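On the target side, the recurring "Unknown controller ID 0x1" comes from the I/O-queue CONNECT handler: an I/O-qpair CONNECT carries the cntlid that the earlier admin-queue CONNECT returned, and when no live controller with that ID remains in the subsystem (here it has already been torn down), the CONNECT is completed with a command-specific error. The following is a simplified, self-contained sketch of that check, not SPDK's actual internals; lookup_ctrlr() is a hypothetical stand-in for the subsystem lookup, and the status values are the NVMe-oF spec codes that match "sct 1, sc 130":

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe-oF spec values matching the log's "sct 1, sc 130" (130 == 0x82). */
    #define SCT_COMMAND_SPECIFIC      0x1
    #define SC_CONNECT_INVALID_PARAM  0x82

    struct ctrlr { uint16_t cntlid; };

    /* Hypothetical stand-in for the subsystem's controller lookup; returns
     * NULL here to model a controller that has already been destroyed. */
    static struct ctrlr *lookup_ctrlr(uint16_t cntlid)
    {
        (void)cntlid;
        return NULL;
    }

    /* Reject an I/O-queue CONNECT whose cntlid no longer names a live
     * controller, mirroring the target-side log line above. */
    static void add_io_qpair(uint16_t cntlid, uint8_t *sct, uint8_t *sc)
    {
        if (lookup_ctrlr(cntlid) == NULL) {
            fprintf(stderr, "Unknown controller ID 0x%x\n", cntlid);
            *sct = SCT_COMMAND_SPECIFIC;
            *sc = SC_CONNECT_INVALID_PARAM;
            return;
        }
        /* ... otherwise bind the new qpair to the controller ... */
    }

    int main(void)
    {
        uint8_t sct = 0, sc = 0;
        add_io_qpair(0x1, &sct, &sc);
        printf("CONNECT completed with error: sct %u, sc %u\n", sct, sc);
        return 0;
    }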
00:24:29.665 [2024-11-06 09:00:52.470407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.470444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.470460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.470467] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.470474] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.480893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 00:24:29.665 [2024-11-06 09:00:52.490556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.490596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.490612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.490620] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.490626] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.500831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 00:24:29.665 [2024-11-06 09:00:52.510534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.510577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.510596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.510603] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.510609] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.520918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 
00:24:29.665 [2024-11-06 09:00:52.530704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.530740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.530756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.530763] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.530770] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.540885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 00:24:29.665 [2024-11-06 09:00:52.550553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.550588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.550603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.550611] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.550618] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.560938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 00:24:29.665 [2024-11-06 09:00:52.570781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.570821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.570836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.570844] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.570851] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.581166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 
00:24:29.665 [2024-11-06 09:00:52.590844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.590887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.590903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.590914] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.590921] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.601137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 00:24:29.665 [2024-11-06 09:00:52.610928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.610971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.610986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.610994] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.611000] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.621068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 00:24:29.665 [2024-11-06 09:00:52.630912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.630953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.630968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.630976] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.630982] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.641135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 
00:24:29.665 [2024-11-06 09:00:52.650988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.651028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.651045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.651052] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.651059] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.665 [2024-11-06 09:00:52.661409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.665 qpair failed and we were unable to recover it. 00:24:29.665 [2024-11-06 09:00:52.671124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.665 [2024-11-06 09:00:52.671167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.665 [2024-11-06 09:00:52.671183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.665 [2024-11-06 09:00:52.671190] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.665 [2024-11-06 09:00:52.671197] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.924 [2024-11-06 09:00:52.681281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.924 qpair failed and we were unable to recover it. 00:24:29.924 [2024-11-06 09:00:52.691081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.924 [2024-11-06 09:00:52.691122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.924 [2024-11-06 09:00:52.691138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.924 [2024-11-06 09:00:52.691146] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.924 [2024-11-06 09:00:52.691152] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.924 [2024-11-06 09:00:52.701419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 
00:24:29.925 [2024-11-06 09:00:52.711165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.711210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.711226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.711233] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.711240] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.721392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.731191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.731234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.731250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.731258] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.731264] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.741589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.751261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.751306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.751322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.751330] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.751336] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.761781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 
00:24:29.925 [2024-11-06 09:00:52.771400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.771445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.771460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.771468] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.771474] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.781651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.791406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.791443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.791459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.791467] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.791473] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.801661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.811425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.811468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.811485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.811492] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.811499] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.821859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 
00:24:29.925 [2024-11-06 09:00:52.831437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.831480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.831496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.831503] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.831509] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.841863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.851508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.851551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.851570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.851577] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.851584] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.861839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.871612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.871648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.871664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.871671] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.871677] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.881766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 
00:24:29.925 [2024-11-06 09:00:52.891741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.891783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.891800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.891807] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.891813] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.902148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.911810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.911848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.911863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.911871] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.911877] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:29.925 [2024-11-06 09:00:52.922031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:29.925 qpair failed and we were unable to recover it. 00:24:29.925 [2024-11-06 09:00:52.931790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.925 [2024-11-06 09:00:52.931830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.925 [2024-11-06 09:00:52.931846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.925 [2024-11-06 09:00:52.931856] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.925 [2024-11-06 09:00:52.931863] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.184 [2024-11-06 09:00:52.942039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.184 qpair failed and we were unable to recover it. 
00:24:30.184 [2024-11-06 09:00:52.951815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.184 [2024-11-06 09:00:52.951857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.184 [2024-11-06 09:00:52.951873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.184 [2024-11-06 09:00:52.951880] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.184 [2024-11-06 09:00:52.951887] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:52.961858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:52.971860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:52.971902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:52.971918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:52.971927] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:52.971933] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:52.982088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:52.991887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:52.991933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:52.991948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:52.991956] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:52.991962] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.002267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 
00:24:30.185 [2024-11-06 09:00:53.012014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.012061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.012076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.012083] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.012089] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.022248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:53.032017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.032055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.032070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.032078] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.032084] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.042237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:53.052036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.052079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.052095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.052103] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.052110] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.062454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 
00:24:30.185 [2024-11-06 09:00:53.072149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.072193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.072216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.072224] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.072231] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.082300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:53.092259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.092303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.092318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.092326] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.092333] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.102663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:53.112198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.112239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.112255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.112262] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.112269] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.122726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 
00:24:30.185 [2024-11-06 09:00:53.132344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.132386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.132401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.132409] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.132415] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.142728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:53.152341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.152387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.152403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.152411] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.152417] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.162631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 00:24:30.185 [2024-11-06 09:00:53.172326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.172367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.172383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.185 [2024-11-06 09:00:53.172390] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.185 [2024-11-06 09:00:53.172397] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.185 [2024-11-06 09:00:53.182676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.185 qpair failed and we were unable to recover it. 
00:24:30.185 [2024-11-06 09:00:53.192508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.185 [2024-11-06 09:00:53.192548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.185 [2024-11-06 09:00:53.192567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.186 [2024-11-06 09:00:53.192574] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.186 [2024-11-06 09:00:53.192580] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.445 [2024-11-06 09:00:53.202572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.445 qpair failed and we were unable to recover it. 00:24:30.445 [2024-11-06 09:00:53.212525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.445 [2024-11-06 09:00:53.212566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.445 [2024-11-06 09:00:53.212582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.445 [2024-11-06 09:00:53.212589] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.445 [2024-11-06 09:00:53.212596] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.445 [2024-11-06 09:00:53.222768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.445 qpair failed and we were unable to recover it. 00:24:30.445 [2024-11-06 09:00:53.232524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.445 [2024-11-06 09:00:53.232565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.445 [2024-11-06 09:00:53.232581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.445 [2024-11-06 09:00:53.232588] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.445 [2024-11-06 09:00:53.232595] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.445 [2024-11-06 09:00:53.242936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.445 qpair failed and we were unable to recover it. 
00:24:30.445 [2024-11-06 09:00:53.252598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.445 [2024-11-06 09:00:53.252638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.445 [2024-11-06 09:00:53.252654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.445 [2024-11-06 09:00:53.252662] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.445 [2024-11-06 09:00:53.252668] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.445 [2024-11-06 09:00:53.263245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.445 qpair failed and we were unable to recover it. 00:24:30.445 [2024-11-06 09:00:53.272750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.445 [2024-11-06 09:00:53.272793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.445 [2024-11-06 09:00:53.272809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.445 [2024-11-06 09:00:53.272817] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.445 [2024-11-06 09:00:53.272827] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.445 [2024-11-06 09:00:53.283125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.446 qpair failed and we were unable to recover it. 00:24:30.446 [2024-11-06 09:00:53.292717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.446 [2024-11-06 09:00:53.292758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.446 [2024-11-06 09:00:53.292774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.446 [2024-11-06 09:00:53.292781] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.446 [2024-11-06 09:00:53.292788] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:30.446 [2024-11-06 09:00:53.302861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.446 qpair failed and we were unable to recover it. 
00:24:30.446 [2024-11-06 09:00:53.312721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.312766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.312783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.312791] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.312798] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.446 [2024-11-06 09:00:53.323167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.446 qpair failed and we were unable to recover it.
00:24:30.446 [2024-11-06 09:00:53.332847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.332887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.332903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.332911] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.332918] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.446 [2024-11-06 09:00:53.343141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.446 qpair failed and we were unable to recover it.
00:24:30.446 [2024-11-06 09:00:53.352929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.352965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.352981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.352988] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.352994] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.446 [2024-11-06 09:00:53.363273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.446 qpair failed and we were unable to recover it.
00:24:30.446 [2024-11-06 09:00:53.373007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.373048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.373064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.373071] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.373077] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.446 [2024-11-06 09:00:53.383192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.446 qpair failed and we were unable to recover it.
00:24:30.446 [2024-11-06 09:00:53.393101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.393144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.393160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.393167] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.393174] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.446 [2024-11-06 09:00:53.403411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.446 qpair failed and we were unable to recover it.
00:24:30.446 [2024-11-06 09:00:53.413133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.413175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.413191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.413198] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.413209] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.446 [2024-11-06 09:00:53.423439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.446 qpair failed and we were unable to recover it.
00:24:30.446 [2024-11-06 09:00:53.433175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.433217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.433233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.433240] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.433247] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.446 [2024-11-06 09:00:53.443511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.446 qpair failed and we were unable to recover it.
00:24:30.446 [2024-11-06 09:00:53.453269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.446 [2024-11-06 09:00:53.453314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.446 [2024-11-06 09:00:53.453330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.446 [2024-11-06 09:00:53.453336] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.446 [2024-11-06 09:00:53.453343] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.463747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.473252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.473294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.473310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.473317] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.473323] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.483450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.493326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.493363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.493379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.493386] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.493393] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.503682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.513400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.513438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.513454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.513461] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.513468] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.523763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.533517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.533560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.533579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.533587] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.533594] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.543718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.553545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.553590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.553606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.553613] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.553620] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.563764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.573563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.573607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.573625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.573633] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.573639] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.583929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.593676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.593714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.593731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.593738] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.593744] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.604029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.613679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.613721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.613738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.613746] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.613755] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.624104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.633819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.633864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.633880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.633888] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.633895] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.643969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.653916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.653961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.653977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.653985] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.706 [2024-11-06 09:00:53.653992] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.706 [2024-11-06 09:00:53.664166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.706 qpair failed and we were unable to recover it.
00:24:30.706 [2024-11-06 09:00:53.673883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.706 [2024-11-06 09:00:53.673927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.706 [2024-11-06 09:00:53.673942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.706 [2024-11-06 09:00:53.673950] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.707 [2024-11-06 09:00:53.673957] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.707 [2024-11-06 09:00:53.684050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.707 qpair failed and we were unable to recover it.
00:24:30.707 [2024-11-06 09:00:53.694028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.707 [2024-11-06 09:00:53.694068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.707 [2024-11-06 09:00:53.694083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.707 [2024-11-06 09:00:53.694091] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.707 [2024-11-06 09:00:53.694097] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.707 [2024-11-06 09:00:53.704241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.707 qpair failed and we were unable to recover it.
00:24:30.707 [2024-11-06 09:00:53.714054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.707 [2024-11-06 09:00:53.714097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.707 [2024-11-06 09:00:53.714112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.707 [2024-11-06 09:00:53.714119] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.707 [2024-11-06 09:00:53.714126] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.724431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.734125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.734163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.734179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.734187] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.734193] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.744360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.754145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.754181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.754198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.754210] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.754216] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.764682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.774213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.774256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.774272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.774279] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.774285] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.784416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.794289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.794337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.794354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.794361] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.794367] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.804656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.814442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.814484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.814500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.814508] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.814514] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.824762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.834451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.834494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.834511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.834518] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.834524] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.844668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.854407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.854450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.854466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.854473] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.854479] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.864864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.874455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.874491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.874507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.874518] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.874524] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.884788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.894597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.894638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.894655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.894662] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.894669] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.904914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.914546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.914588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.914604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.914611] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.914618] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.924912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.967 qpair failed and we were unable to recover it.
00:24:30.967 [2024-11-06 09:00:53.934715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.967 [2024-11-06 09:00:53.934756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.967 [2024-11-06 09:00:53.934773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.967 [2024-11-06 09:00:53.934780] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.967 [2024-11-06 09:00:53.934787] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.967 [2024-11-06 09:00:53.945073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.968 qpair failed and we were unable to recover it.
00:24:30.968 [2024-11-06 09:00:53.954751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.968 [2024-11-06 09:00:53.954798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.968 [2024-11-06 09:00:53.954814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.968 [2024-11-06 09:00:53.954822] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.968 [2024-11-06 09:00:53.954832] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:30.968 [2024-11-06 09:00:53.965109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:30.968 qpair failed and we were unable to recover it.
00:24:30.968 [2024-11-06 09:00:53.974790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.968 [2024-11-06 09:00:53.974827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.968 [2024-11-06 09:00:53.974843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.968 [2024-11-06 09:00:53.974850] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.968 [2024-11-06 09:00:53.974856] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.227 [2024-11-06 09:00:53.985055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.227 qpair failed and we were unable to recover it.
00:24:31.227 [2024-11-06 09:00:53.994844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.227 [2024-11-06 09:00:53.994879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.227 [2024-11-06 09:00:53.994895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.227 [2024-11-06 09:00:53.994903] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.227 [2024-11-06 09:00:53.994910] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.227 [2024-11-06 09:00:54.005224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.227 qpair failed and we were unable to recover it.
00:24:31.227 [2024-11-06 09:00:54.015024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.227 [2024-11-06 09:00:54.015066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.227 [2024-11-06 09:00:54.015081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.227 [2024-11-06 09:00:54.015088] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.227 [2024-11-06 09:00:54.015095] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.227 [2024-11-06 09:00:54.025313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.227 qpair failed and we were unable to recover it.
00:24:31.227 [2024-11-06 09:00:54.035000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.227 [2024-11-06 09:00:54.035044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.227 [2024-11-06 09:00:54.035060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.227 [2024-11-06 09:00:54.035067] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.227 [2024-11-06 09:00:54.035073] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.227 [2024-11-06 09:00:54.045428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.227 qpair failed and we were unable to recover it.
00:24:31.227 [2024-11-06 09:00:54.055148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.227 [2024-11-06 09:00:54.055191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.227 [2024-11-06 09:00:54.055211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.227 [2024-11-06 09:00:54.055219] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.227 [2024-11-06 09:00:54.055226] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.227 [2024-11-06 09:00:54.065555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.227 qpair failed and we were unable to recover it.
00:24:31.227 [2024-11-06 09:00:54.075104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.227 [2024-11-06 09:00:54.075143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.227 [2024-11-06 09:00:54.075159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.227 [2024-11-06 09:00:54.075166] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.227 [2024-11-06 09:00:54.075173] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.227 [2024-11-06 09:00:54.085320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.227 qpair failed and we were unable to recover it.
00:24:31.227 [2024-11-06 09:00:54.095255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.227 [2024-11-06 09:00:54.095298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.227 [2024-11-06 09:00:54.095314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.095321] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.095328] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.228 [2024-11-06 09:00:54.105595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.228 qpair failed and we were unable to recover it.
00:24:31.228 [2024-11-06 09:00:54.115300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.228 [2024-11-06 09:00:54.115346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.228 [2024-11-06 09:00:54.115362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.115370] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.115376] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.228 [2024-11-06 09:00:54.125650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.228 qpair failed and we were unable to recover it.
00:24:31.228 [2024-11-06 09:00:54.135222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.228 [2024-11-06 09:00:54.135264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.228 [2024-11-06 09:00:54.135283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.135290] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.135297] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.228 [2024-11-06 09:00:54.145805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.228 qpair failed and we were unable to recover it.
00:24:31.228 [2024-11-06 09:00:54.155413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.228 [2024-11-06 09:00:54.155452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.228 [2024-11-06 09:00:54.155468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.155475] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.155482] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.228 [2024-11-06 09:00:54.165615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.228 qpair failed and we were unable to recover it.
00:24:31.228 [2024-11-06 09:00:54.175554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.228 [2024-11-06 09:00:54.175594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.228 [2024-11-06 09:00:54.175609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.175617] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.175623] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.228 [2024-11-06 09:00:54.185756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.228 qpair failed and we were unable to recover it.
00:24:31.228 [2024-11-06 09:00:54.195666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.228 [2024-11-06 09:00:54.195712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.228 [2024-11-06 09:00:54.195728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.195735] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.195742] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.228 [2024-11-06 09:00:54.205901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.228 qpair failed and we were unable to recover it.
00:24:31.228 [2024-11-06 09:00:54.215676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.228 [2024-11-06 09:00:54.215720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.228 [2024-11-06 09:00:54.215735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.215749] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.215756] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.228 [2024-11-06 09:00:54.226000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.228 qpair failed and we were unable to recover it.
00:24:31.228 [2024-11-06 09:00:54.235683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.228 [2024-11-06 09:00:54.235719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.228 [2024-11-06 09:00:54.235735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.228 [2024-11-06 09:00:54.235743] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.228 [2024-11-06 09:00:54.235749] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.488 [2024-11-06 09:00:54.245942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.488 qpair failed and we were unable to recover it.
00:24:31.488 [2024-11-06 09:00:54.255752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.488 [2024-11-06 09:00:54.255792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.488 [2024-11-06 09:00:54.255808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.488 [2024-11-06 09:00:54.255815] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.488 [2024-11-06 09:00:54.255823] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.488 [2024-11-06 09:00:54.265997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.488 qpair failed and we were unable to recover it.
00:24:31.488 [2024-11-06 09:00:54.275806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.488 [2024-11-06 09:00:54.275850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.488 [2024-11-06 09:00:54.275866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.488 [2024-11-06 09:00:54.275874] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.488 [2024-11-06 09:00:54.275880] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.488 [2024-11-06 09:00:54.286139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.488 qpair failed and we were unable to recover it.
00:24:31.488 [2024-11-06 09:00:54.295847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.488 [2024-11-06 09:00:54.295889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.488 [2024-11-06 09:00:54.295905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.488 [2024-11-06 09:00:54.295912] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.488 [2024-11-06 09:00:54.295919] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.488 [2024-11-06 09:00:54.306129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.315894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.315938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.315954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.315962] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.315968] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.326119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.335909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.335951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.335966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.335974] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.335981] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.346258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.355967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.356009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.356024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.356032] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.356038] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.366143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.376094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.376138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.376155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.376162] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.376168] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.386424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.396153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.396194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.396218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.396226] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.396233] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.406384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.416067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.416110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.416127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.416134] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.416141] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.426313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.436256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.436300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.436316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.436323] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.436330] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.446535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.456282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.456321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.456337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.456344] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.456350] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.466619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.476335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.476378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.476397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.476405] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.476411] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.489 [2024-11-06 09:00:54.486711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.489 qpair failed and we were unable to recover it.
00:24:31.489 [2024-11-06 09:00:54.496314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.489 [2024-11-06 09:00:54.496353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.489 [2024-11-06 09:00:54.496370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.489 [2024-11-06 09:00:54.496377] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.489 [2024-11-06 09:00:54.496383] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.506654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.516491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.516533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.516549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.516557] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.516564] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.526790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.536491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.536532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.536547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.536554] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.536561] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.546769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.556589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.556625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.556641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.556653] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.556659] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.566950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.576665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.576707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.576724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.576731] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.576738] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.586821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.596707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.596749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.596764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.596773] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.596779] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.607111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.616855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.616894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.616910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.616917] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.616924] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.627100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.636829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.636868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.636884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.636892] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.636898] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.647229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.656831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.656871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.656887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.656894] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.656901] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.667126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.676912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.676962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.676978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.676985] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.676992] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.687208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.696915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.696952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.696968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.696975] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.696982] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.707250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.716961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.717005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.717021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.717028] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.717035] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.727307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.737046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.737088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.737104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.737111] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.737118] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:31.749 [2024-11-06 09:00:54.747465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.749 qpair failed and we were unable to recover it.
00:24:31.749 [2024-11-06 09:00:54.757140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.749 [2024-11-06 09:00:54.757179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.749 [2024-11-06 09:00:54.757195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.749 [2024-11-06 09:00:54.757208] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.749 [2024-11-06 09:00:54.757215] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.767484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.777200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.777244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.777260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.777268] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.777274] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.787541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.797263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.797300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.797316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.797324] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.797330] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.807683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.817284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.817327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.817347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.817354] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.817361] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.827601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.837399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.837446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.837462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.837470] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.837476] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.847757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.857417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.857459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.857475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.857482] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.857489] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.867868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.878484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.878527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.878543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.878550] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.878557] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.887872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.897616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.897656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.897671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.897679] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.897688] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.907996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.917708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.917746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.917762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.917770] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.917776] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.928020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.937749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.937788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.937804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.937811] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.937817] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.009 [2024-11-06 09:00:54.948024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.009 qpair failed and we were unable to recover it.
00:24:32.009 [2024-11-06 09:00:54.957756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.009 [2024-11-06 09:00:54.957799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.009 [2024-11-06 09:00:54.957814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.009 [2024-11-06 09:00:54.957822] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.009 [2024-11-06 09:00:54.957828] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.010 [2024-11-06 09:00:54.968253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.010 qpair failed and we were unable to recover it.
00:24:32.010 [2024-11-06 09:00:54.977853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.010 [2024-11-06 09:00:54.977896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.010 [2024-11-06 09:00:54.977913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.010 [2024-11-06 09:00:54.977920] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.010 [2024-11-06 09:00:54.977927] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.010 [2024-11-06 09:00:54.988267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.010 qpair failed and we were unable to recover it.
00:24:32.010 [2024-11-06 09:00:54.997878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.010 [2024-11-06 09:00:54.997918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.010 [2024-11-06 09:00:54.997934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.010 [2024-11-06 09:00:54.997942] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.010 [2024-11-06 09:00:54.997948] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.010 [2024-11-06 09:00:55.008173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.010 qpair failed and we were unable to recover it.
00:24:32.010 [2024-11-06 09:00:55.017980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.010 [2024-11-06 09:00:55.018018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.010 [2024-11-06 09:00:55.018034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.010 [2024-11-06 09:00:55.018041] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.010 [2024-11-06 09:00:55.018048] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.269 [2024-11-06 09:00:55.028128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.269 qpair failed and we were unable to recover it.
00:24:32.269 [2024-11-06 09:00:55.037936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.269 [2024-11-06 09:00:55.037976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.269 [2024-11-06 09:00:55.037992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.269 [2024-11-06 09:00:55.038000] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.269 [2024-11-06 09:00:55.038008] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.269 [2024-11-06 09:00:55.048367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.269 qpair failed and we were unable to recover it.
00:24:32.269 [2024-11-06 09:00:55.057977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.269 [2024-11-06 09:00:55.058017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.269 [2024-11-06 09:00:55.058032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.269 [2024-11-06 09:00:55.058040] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.269 [2024-11-06 09:00:55.058047] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.269 [2024-11-06 09:00:55.068326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.269 qpair failed and we were unable to recover it.
00:24:32.269 [2024-11-06 09:00:55.078066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.269 [2024-11-06 09:00:55.078116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.269 [2024-11-06 09:00:55.078132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.269 [2024-11-06 09:00:55.078140] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.078146] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.088301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.098135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.098176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.098192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.098199] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.098211] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.108358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.118160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.118200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.118220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.118228] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.118234] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.128419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.138377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.138419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.138434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.138442] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.138448] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.148462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.158380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.158423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.158442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.158449] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.158456] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.168515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.178363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.178407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.178423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.178431] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.178437] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.188716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.198479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.198515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.198531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.198538] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.198545] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.208622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.218515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.218555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.218571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.218578] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.218585] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.228856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.238556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.238602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.238618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.238626] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.238636] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.248991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.258774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.258811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.258828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.258835] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.258842] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.270 [2024-11-06 09:00:55.269025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.270 qpair failed and we were unable to recover it.
00:24:32.270 [2024-11-06 09:00:55.278678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.270 [2024-11-06 09:00:55.278721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.270 [2024-11-06 09:00:55.278737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.270 [2024-11-06 09:00:55.278744] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.270 [2024-11-06 09:00:55.278750] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.288941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.298777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.298819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.298834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.298842] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.298849] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.309129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.318775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.318816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.318831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.318838] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.318845] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.329134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.338902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.338946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.338963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.338970] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.338977] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.348982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.358856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.358895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.358913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.358920] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.358927] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.369174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.378920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.378963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.378979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.378986] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.378993] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.389120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.398963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.399008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.399023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.399031] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.399038] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.409435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.419119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.419159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.419176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.419183] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.419189] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.429290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.439099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.439141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.439157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.439164] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.439171] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.449434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.459265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.459306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.459321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.459329] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.459337] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.469607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.479221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.479261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.530 [2024-11-06 09:00:55.479277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.530 [2024-11-06 09:00:55.479285] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.530 [2024-11-06 09:00:55.479291] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.530 [2024-11-06 09:00:55.489710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.530 qpair failed and we were unable to recover it.
00:24:32.530 [2024-11-06 09:00:55.499408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.530 [2024-11-06 09:00:55.499452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.531 [2024-11-06 09:00:55.499472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.531 [2024-11-06 09:00:55.499479] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.531 [2024-11-06 09:00:55.499486] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.531 [2024-11-06 09:00:55.509598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.531 qpair failed and we were unable to recover it.
00:24:32.531 [2024-11-06 09:00:55.519419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.531 [2024-11-06 09:00:55.519458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.531 [2024-11-06 09:00:55.519474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.531 [2024-11-06 09:00:55.519481] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.531 [2024-11-06 09:00:55.519487] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.531 [2024-11-06 09:00:55.529764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.531 qpair failed and we were unable to recover it.
00:24:32.531 [2024-11-06 09:00:55.539520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.531 [2024-11-06 09:00:55.539560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.531 [2024-11-06 09:00:55.539575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.531 [2024-11-06 09:00:55.539582] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.531 [2024-11-06 09:00:55.539589] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.790 [2024-11-06 09:00:55.549718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.790 qpair failed and we were unable to recover it.
00:24:32.790 [2024-11-06 09:00:55.559463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.790 [2024-11-06 09:00:55.559505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.790 [2024-11-06 09:00:55.559521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.790 [2024-11-06 09:00:55.559529] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.790 [2024-11-06 09:00:55.559536] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.790 [2024-11-06 09:00:55.569812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.790 qpair failed and we were unable to recover it.
00:24:32.790 [2024-11-06 09:00:55.579607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.790 [2024-11-06 09:00:55.579653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.790 [2024-11-06 09:00:55.579671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.790 [2024-11-06 09:00:55.579679] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.790 [2024-11-06 09:00:55.579689] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:24:32.790 [2024-11-06 09:00:55.589960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:32.790 qpair failed and we were unable to recover it.
00:24:32.790 [2024-11-06 09:00:55.599655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.599694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.599710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.599717] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.599724] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.610143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 00:24:32.790 [2024-11-06 09:00:55.619673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.619715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.619731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.619738] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.619745] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.629903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 00:24:32.790 [2024-11-06 09:00:55.639891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.639932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.639948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.639955] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.639961] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.650064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 
00:24:32.790 [2024-11-06 09:00:55.659827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.659862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.659878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.659885] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.659892] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.670216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 00:24:32.790 [2024-11-06 09:00:55.679851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.679887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.679903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.679910] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.679916] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.690033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 00:24:32.790 [2024-11-06 09:00:55.700092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.700133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.700149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.700156] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.700162] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.710240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 
00:24:32.790 [2024-11-06 09:00:55.720010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.720058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.720074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.720081] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.720088] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.730258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 00:24:32.790 [2024-11-06 09:00:55.740034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.740080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.740096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.740102] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.740109] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.750356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 00:24:32.790 [2024-11-06 09:00:55.760048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.760094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.790 [2024-11-06 09:00:55.760114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.790 [2024-11-06 09:00:55.760121] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.790 [2024-11-06 09:00:55.760128] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.790 [2024-11-06 09:00:55.770462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.790 qpair failed and we were unable to recover it. 
00:24:32.790 [2024-11-06 09:00:55.780264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.790 [2024-11-06 09:00:55.780307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.791 [2024-11-06 09:00:55.780323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.791 [2024-11-06 09:00:55.780330] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.791 [2024-11-06 09:00:55.780337] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:32.791 [2024-11-06 09:00:55.790546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.791 qpair failed and we were unable to recover it. 00:24:32.791 [2024-11-06 09:00:55.800272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.791 [2024-11-06 09:00:55.800320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.791 [2024-11-06 09:00:55.800336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.791 [2024-11-06 09:00:55.800343] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.791 [2024-11-06 09:00:55.800350] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.050 [2024-11-06 09:00:55.810511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.050 qpair failed and we were unable to recover it. 00:24:33.050 [2024-11-06 09:00:55.820423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.050 [2024-11-06 09:00:55.820467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.050 [2024-11-06 09:00:55.820483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.050 [2024-11-06 09:00:55.820490] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.050 [2024-11-06 09:00:55.820497] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.050 [2024-11-06 09:00:55.830650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.050 qpair failed and we were unable to recover it. 
00:24:33.050 [2024-11-06 09:00:55.840427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.050 [2024-11-06 09:00:55.840470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.050 [2024-11-06 09:00:55.840486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.050 [2024-11-06 09:00:55.840497] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.050 [2024-11-06 09:00:55.840503] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.050 [2024-11-06 09:00:55.850654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.050 qpair failed and we were unable to recover it. 00:24:33.050 [2024-11-06 09:00:55.860469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.050 [2024-11-06 09:00:55.860510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.050 [2024-11-06 09:00:55.860525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.050 [2024-11-06 09:00:55.860533] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.050 [2024-11-06 09:00:55.860539] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.050 [2024-11-06 09:00:55.870792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.050 qpair failed and we were unable to recover it. 00:24:33.050 [2024-11-06 09:00:55.880557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.050 [2024-11-06 09:00:55.880599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.050 [2024-11-06 09:00:55.880614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.050 [2024-11-06 09:00:55.880622] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.050 [2024-11-06 09:00:55.880628] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.050 [2024-11-06 09:00:55.890957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.050 qpair failed and we were unable to recover it. 
00:24:33.050 [2024-11-06 09:00:55.900638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.050 [2024-11-06 09:00:55.900678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.050 [2024-11-06 09:00:55.900694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.050 [2024-11-06 09:00:55.900701] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.050 [2024-11-06 09:00:55.900708] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.050 [2024-11-06 09:00:55.910860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.050 qpair failed and we were unable to recover it. 00:24:33.050 [2024-11-06 09:00:55.920724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.050 [2024-11-06 09:00:55.920760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:55.920776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:55.920783] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:55.920790] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.051 [2024-11-06 09:00:55.931088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.051 qpair failed and we were unable to recover it. 00:24:33.051 [2024-11-06 09:00:55.940767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.051 [2024-11-06 09:00:55.940809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:55.940825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:55.940832] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:55.940839] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.051 [2024-11-06 09:00:55.951016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.051 qpair failed and we were unable to recover it. 
00:24:33.051 [2024-11-06 09:00:55.960865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.051 [2024-11-06 09:00:55.960906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:55.960921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:55.960930] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:55.960936] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.051 [2024-11-06 09:00:55.971058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.051 qpair failed and we were unable to recover it. 00:24:33.051 [2024-11-06 09:00:55.980854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.051 [2024-11-06 09:00:55.980895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:55.980911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:55.980918] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:55.980925] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.051 [2024-11-06 09:00:55.990898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.051 qpair failed and we were unable to recover it. 00:24:33.051 [2024-11-06 09:00:56.000891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.051 [2024-11-06 09:00:56.000936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:56.000952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:56.000960] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:56.000966] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.051 [2024-11-06 09:00:56.011079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.051 qpair failed and we were unable to recover it. 
00:24:33.051 [2024-11-06 09:00:56.021022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.051 [2024-11-06 09:00:56.021064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:56.021079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:56.021086] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:56.021092] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.051 [2024-11-06 09:00:56.031310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.051 qpair failed and we were unable to recover it. 00:24:33.051 [2024-11-06 09:00:56.040967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.051 [2024-11-06 09:00:56.041011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:56.041026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:56.041033] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:56.041039] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.051 [2024-11-06 09:00:56.051271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.051 qpair failed and we were unable to recover it. 00:24:33.051 [2024-11-06 09:00:56.061083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.051 [2024-11-06 09:00:56.061125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.051 [2024-11-06 09:00:56.061141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.051 [2024-11-06 09:00:56.061148] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.051 [2024-11-06 09:00:56.061155] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.310 [2024-11-06 09:00:56.071455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.310 qpair failed and we were unable to recover it. 
00:24:33.310 [2024-11-06 09:00:56.081055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.310 [2024-11-06 09:00:56.081092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.310 [2024-11-06 09:00:56.081108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.310 [2024-11-06 09:00:56.081115] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.310 [2024-11-06 09:00:56.081121] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.310 [2024-11-06 09:00:56.091473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.310 qpair failed and we were unable to recover it. 00:24:33.310 [2024-11-06 09:00:56.101154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.310 [2024-11-06 09:00:56.101196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.310 [2024-11-06 09:00:56.101221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.310 [2024-11-06 09:00:56.101228] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.310 [2024-11-06 09:00:56.101235] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.310 [2024-11-06 09:00:56.111535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.310 qpair failed and we were unable to recover it. 00:24:33.310 [2024-11-06 09:00:56.121290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.310 [2024-11-06 09:00:56.121335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.310 [2024-11-06 09:00:56.121351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.310 [2024-11-06 09:00:56.121358] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.310 [2024-11-06 09:00:56.121364] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.310 [2024-11-06 09:00:56.131631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.310 qpair failed and we were unable to recover it. 
00:24:33.310 [2024-11-06 09:00:56.141323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.310 [2024-11-06 09:00:56.141364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.310 [2024-11-06 09:00:56.141380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.310 [2024-11-06 09:00:56.141387] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.310 [2024-11-06 09:00:56.141394] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.310 [2024-11-06 09:00:56.151669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.310 qpair failed and we were unable to recover it. 00:24:33.310 [2024-11-06 09:00:56.161331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.310 [2024-11-06 09:00:56.161373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.310 [2024-11-06 09:00:56.161388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.310 [2024-11-06 09:00:56.161396] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.161402] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.171720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 00:24:33.311 [2024-11-06 09:00:56.181521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.181563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.181579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.181590] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.181597] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.191875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 
00:24:33.311 [2024-11-06 09:00:56.201462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.201503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.201520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.201527] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.201533] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.211798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 00:24:33.311 [2024-11-06 09:00:56.221628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.221668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.221684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.221691] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.221698] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.231754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 00:24:33.311 [2024-11-06 09:00:56.241596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.241634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.241650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.241657] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.241663] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.252008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 
00:24:33.311 [2024-11-06 09:00:56.261690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.261730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.261745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.261752] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.261759] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.271968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 00:24:33.311 [2024-11-06 09:00:56.281797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.281840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.281856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.281863] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.281870] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.292121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 00:24:33.311 [2024-11-06 09:00:56.301784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.301823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.301838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.301846] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.301852] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.311 [2024-11-06 09:00:56.312172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.311 qpair failed and we were unable to recover it. 
00:24:33.311 [2024-11-06 09:00:56.321805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.311 [2024-11-06 09:00:56.321847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.311 [2024-11-06 09:00:56.321862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.311 [2024-11-06 09:00:56.321869] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.311 [2024-11-06 09:00:56.321876] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.570 [2024-11-06 09:00:56.332168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.570 qpair failed and we were unable to recover it. 00:24:33.570 [2024-11-06 09:00:56.341872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.570 [2024-11-06 09:00:56.341915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.570 [2024-11-06 09:00:56.341930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.570 [2024-11-06 09:00:56.341938] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.570 [2024-11-06 09:00:56.341944] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.570 [2024-11-06 09:00:56.352097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.570 qpair failed and we were unable to recover it. 00:24:33.570 [2024-11-06 09:00:56.361911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.570 [2024-11-06 09:00:56.361956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.570 [2024-11-06 09:00:56.361972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.570 [2024-11-06 09:00:56.361979] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.570 [2024-11-06 09:00:56.361986] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.372107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 
00:24:33.571 [2024-11-06 09:00:56.381971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.382011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.382027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.382035] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.382042] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.392263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 00:24:33.571 [2024-11-06 09:00:56.402030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.402073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.402089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.402096] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.402102] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.412223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 00:24:33.571 [2024-11-06 09:00:56.422091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.422131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.422147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.422154] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.422161] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.432485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 
00:24:33.571 [2024-11-06 09:00:56.442177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.442218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.442239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.442247] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.442253] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.452605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 00:24:33.571 [2024-11-06 09:00:56.462226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.462265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.462281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.462289] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.462296] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.472490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 00:24:33.571 [2024-11-06 09:00:56.482309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.482346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.482361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.482368] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.482375] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.492756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 
00:24:33.571 [2024-11-06 09:00:56.502381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.502421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.502437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.502445] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.502451] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.512827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 00:24:33.571 [2024-11-06 09:00:56.522424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.522464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.522479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.522490] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.522497] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.532728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 00:24:33.571 [2024-11-06 09:00:56.542532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.542573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.542590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.542597] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.542604] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.552849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 
00:24:33.571 [2024-11-06 09:00:56.562508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.562546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.562562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.562570] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.562577] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.571 [2024-11-06 09:00:56.572758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.571 qpair failed and we were unable to recover it. 00:24:33.571 [2024-11-06 09:00:56.582525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.571 [2024-11-06 09:00:56.582566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.571 [2024-11-06 09:00:56.582582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.571 [2024-11-06 09:00:56.582590] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.571 [2024-11-06 09:00:56.582596] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:33.830 [2024-11-06 09:00:56.593106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.830 qpair failed and we were unable to recover it. 
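Every attempt above ends the same way: once the RDMA qpair is dead, spdk_nvme_qpair_process_completions() stops returning a completion count and instead returns a negative errno, and -6 is -ENXIO ("No such device or address"), which is exactly the "CQ transport error -6" line. A rough poll-and-retry loop in the spirit of what the tester is doing, though not the reconnect example's actual code, assuming an already-allocated I/O qpair:

#include <stdio.h>

#include "spdk/nvme.h"
#include "spdk/string.h" /* spdk_strerror() */

/*
 * Poll an I/O qpair; on transport failure (-ENXIO, the "-6" in the
 * log) disconnect it and try to connect it again, mirroring the
 * failed-attempt/retry cadence visible in the timestamps above.
 */
static void
poll_and_retry(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

    if (rc >= 0) {
        return;  /* rc completions were reaped */
    }
    fprintf(stderr, "CQ transport error %d (%s)\n", (int)rc, spdk_strerror(-(int)rc));
    spdk_nvme_ctrlr_disconnect_io_qpair(qpair);
    if (spdk_nvme_ctrlr_connect_io_qpair(ctrlr, qpair) != 0) {
        fprintf(stderr, "qpair failed and we were unable to recover it.\n");
    }
}

The roughly 10 ms spacing between each "Failed to connect rqpair" and the following "CQ transport error" stamp is consistent with a loop of this shape spinning between retries.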
00:24:34.765 Write completed with error (sct=0, sc=8) 00:24:34.765 starting I/O failed 00:24:34.765 Write completed with error (sct=0, sc=8) 00:24:34.765 starting I/O failed 00:24:34.765 Write completed with error (sct=0, sc=8) 00:24:34.765 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Read completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 Write completed with error (sct=0, sc=8) 00:24:34.766 starting I/O failed 00:24:34.766 [2024-11-06 09:00:57.598270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.766 [2024-11-06 09:00:57.605406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.766 [2024-11-06 09:00:57.605456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.766 [2024-11-06 09:00:57.605474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.766 [2024-11-06 09:00:57.605482] 
nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.766 [2024-11-06 09:00:57.605488] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:24:34.766 [2024-11-06 09:00:57.615784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.766 qpair failed and we were unable to recover it. 00:24:34.766 [2024-11-06 09:00:57.625754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.766 [2024-11-06 09:00:57.625800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.766 [2024-11-06 09:00:57.625817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.766 [2024-11-06 09:00:57.625824] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.766 [2024-11-06 09:00:57.625831] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:24:34.766 [2024-11-06 09:00:57.635953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.766 qpair failed and we were unable to recover it. 00:24:34.766 [2024-11-06 09:00:57.636023] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:24:34.766 A controller has encountered a failure and is being reset. 00:24:34.766 [2024-11-06 09:00:57.636127] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:34.766 [2024-11-06 09:00:57.668347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:24:34.766 Controller properly reset. 00:24:34.766 Initializing NVMe Controllers 00:24:34.766 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.766 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.766 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:34.766 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:34.766 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:34.766 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:34.766 Initialization complete. Launching workers. 
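The tail of tc2 above is the recovery path: in-flight I/O drains with sct=0, sc=8 (generic status 0x08, Command Aborted due to SQ Deletion, expected once the qpairs are torn down), the Keep Alive submit fails, and the controller is reset and re-attached with one worker per lcore 0-3. The application-level step behind "Controller properly reset" is a controller reset; a minimal sketch using SPDK's public API (helper name ours):

#include <stdio.h>

#include "spdk/nvme.h"

/*
 * "A controller has encountered a failure and is being reset" /
 * "Controller properly reset."  spdk_nvme_ctrlr_reset() tears the
 * controller down and re-enables it; afterwards the application must
 * re-create or reconnect its I/O qpairs before resubmitting the I/O
 * that completed with sct=0, sc=8 (aborted by SQ deletion) above.
 */
static int
reset_and_report(struct spdk_nvme_ctrlr *ctrlr)
{
    int rc = spdk_nvme_ctrlr_reset(ctrlr);

    if (rc != 0) {
        fprintf(stderr, "controller reset failed: %d\n", rc);
        return rc;
    }
    printf("Controller properly reset.\n");
    return 0;
}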
00:24:34.766 Starting thread on core 1 00:24:34.766 Starting thread on core 2 00:24:34.766 Starting thread on core 3 00:24:34.766 Starting thread on core 0 00:24:34.766 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:34.766 00:24:34.766 real 0m11.944s 00:24:34.766 user 0m25.420s 00:24:34.766 sys 0m2.261s 00:24:34.766 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:34.766 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.766 ************************************ 00:24:34.766 END TEST nvmf_target_disconnect_tc2 00:24:34.766 ************************************ 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:35.025 ************************************ 00:24:35.025 START TEST nvmf_target_disconnect_tc3 00:24:35.025 ************************************ 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=550364 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:24:35.025 09:00:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:36.927 09:00:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 549131 00:24:36.927 09:00:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write 
completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Read completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 Write completed with error (sct=0, sc=8) 00:24:38.302 starting I/O failed 00:24:38.302 [2024-11-06 09:01:01.012871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:38.870 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 549131 Killed "${NVMF_APP[@]}" "$@" 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # nvmfpid=551052 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # waitforlisten 551052 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@831 -- # '[' -z 551052 ']' 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.870 09:01:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.130 [2024-11-06 09:01:01.890743] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:24:39.130 [2024-11-06 09:01:01.890791] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.130 [2024-11-06 09:01:01.968669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.130 [2024-11-06 09:01:02.008087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.130 [2024-11-06 09:01:02.008124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.130 [2024-11-06 09:01:02.008132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.130 [2024-11-06 09:01:02.008138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.130 [2024-11-06 09:01:02.008143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
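The NOTICE lines above give the tracing recipe for this target: it was started with -i 0 (shared-memory id 0) and -e 0xFFFF (all tracepoint groups enabled), and -m 0xF0 keeps it on cores 4-7, which the reactor lines just below confirm. Capturing those events by hand would look like the following (the -s/-i form is quoted from the NOTICE itself; the -f readback flag is my assumption about spdk_trace's offline file mode):

# live snapshot of the nvmf app's tracepoints, shm id 0 from 'nvmf_tgt -i 0':
build/bin/spdk_trace -s nvmf -i 0
# or preserve the buffer for offline analysis, as the last NOTICE suggests:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
build/bin/spdk_trace -f /tmp/nvmf_trace.0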
00:24:39.130 [2024-11-06 09:01:02.009716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:39.130 [2024-11-06 09:01:02.009826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:39.130 [2024-11-06 09:01:02.009933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:39.130 [2024-11-06 09:01:02.009934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Read completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 Write completed with error (sct=0, sc=8) 00:24:39.130 starting I/O failed 00:24:39.130 [2024-11-06 09:01:02.017904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:39.130 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.130 
09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:39.130 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:39.130 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.130 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.389 Malloc0 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.389 [2024-11-06 09:01:02.200867] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x168bbe0/0x1697b40) succeed. 00:24:39.389 [2024-11-06 09:01:02.210374] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x168d270/0x16d91e0) succeed. 
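The two create_ib_device NOTICEs mean the RDMA transport bound both mlx5 ports before any subsystem was defined. When that step fails, the usual first check is from the shell with stock rdma-core/iproute2 tools (device names taken from the NOTICEs above; these commands are not part of this harness):

# verbs-level view of the ports the transport just claimed:
ibv_devinfo -d mlx5_0 | head
ibv_devinfo -d mlx5_1 | head
# kernel's view of RDMA link state:
rdma link show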
00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.389 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.390 [2024-11-06 09:01:02.352634] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.390 09:01:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 550364 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 
starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Write completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 Read completed with error (sct=0, sc=8) 00:24:40.328 starting I/O failed 00:24:40.328 [2024-11-06 09:01:03.023020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:40.328 [2024-11-06 09:01:03.024624] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:40.328 [2024-11-06 09:01:03.024644] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:40.328 [2024-11-06 09:01:03.024652] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:41.265 [2024-11-06 09:01:04.028559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:41.265 qpair failed and we were unable to recover it. 
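For reference, the six rpc_cmd calls a few lines above are the entire tc3 target bring-up; replayed by hand against the nvmf_tgt started as pid 551052 they would read (rpc_cmd is a thin wrapper over the stock scripts/rpc.py client):

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

Note the listeners sit only on the alternate address 192.168.100.9; the Connect retries in this loop are still aimed at the original side, which is why they keep failing until the host resorts to the failover address further down.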
00:24:41.265 [2024-11-06 09:01:04.030053] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:41.265 [2024-11-06 09:01:04.030070] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:41.265 [2024-11-06 09:01:04.030077] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:42.202 [2024-11-06 09:01:05.033966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:42.202 qpair failed and we were unable to recover it. 00:24:42.202 [2024-11-06 09:01:05.035467] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:42.202 [2024-11-06 09:01:05.035485] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:42.202 [2024-11-06 09:01:05.035491] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:43.144 [2024-11-06 09:01:06.039290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:43.144 qpair failed and we were unable to recover it. 00:24:43.144 [2024-11-06 09:01:06.040657] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:43.144 [2024-11-06 09:01:06.040675] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:43.144 [2024-11-06 09:01:06.040681] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:44.084 [2024-11-06 09:01:07.044599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:44.084 qpair failed and we were unable to recover it. 00:24:44.084 [2024-11-06 09:01:07.045939] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:44.084 [2024-11-06 09:01:07.045960] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:44.084 [2024-11-06 09:01:07.045966] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:45.463 [2024-11-06 09:01:08.049857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:45.463 qpair failed and we were unable to recover it. 
00:24:45.463 [2024-11-06 09:01:08.051296] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:45.463 [2024-11-06 09:01:08.051313] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:45.463 [2024-11-06 09:01:08.051319] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:46.403 [2024-11-06 09:01:09.055135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:46.403 qpair failed and we were unable to recover it. 00:24:46.403 [2024-11-06 09:01:09.056554] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:46.403 [2024-11-06 09:01:09.056571] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:46.403 [2024-11-06 09:01:09.056578] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:24:47.343 [2024-11-06 09:01:10.060430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:24:47.343 qpair failed and we were unable to recover it. 00:24:47.343 [2024-11-06 09:01:10.062358] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:47.343 [2024-11-06 09:01:10.062413] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:47.343 [2024-11-06 09:01:10.062435] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:24:48.284 [2024-11-06 09:01:11.066313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:48.284 qpair failed and we were unable to recover it. 00:24:48.284 [2024-11-06 09:01:11.067767] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:48.284 [2024-11-06 09:01:11.067783] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:48.284 [2024-11-06 09:01:11.067789] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:24:49.226 [2024-11-06 09:01:12.071601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:49.226 qpair failed and we were unable to recover it. 00:24:49.226 [2024-11-06 09:01:12.071704] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:24:49.226 A controller has encountered a failure and is being reset. 00:24:49.226 Resorting to new failover address 192.168.100.9 00:24:49.226 [2024-11-06 09:01:12.071790] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
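The host process driving these retries is the reconnect example launched earlier (reconnectpid=550364). Annotating that invocation (the flag glosses follow common SPDK example conventions, so treat them as a best-effort reading rather than documented semantics):

# 32 outstanding I/Os per queue: hence the 32 'starting I/O failed' lines each
# time a qpair drops. 4 KiB random read/write at a 50/50 mix for a 10 s run,
# host app on cores 0-3 (0xF) while the target owns cores 4-7 (0xF0):
build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
# alt_traddr names the failover target; once Keep Alive fails on .8 the host
# resorts to the new failover address 192.168.100.9, exactly as logged above.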
00:24:49.226 [2024-11-06 09:01:12.071850] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:49.226 [2024-11-06 09:01:12.073833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:49.226 Controller properly reset. 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Write completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 Read completed with error (sct=0, sc=8) 00:24:50.167 starting I/O failed 00:24:50.167 [2024-11-06 09:01:13.133254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:50.167 Initializing NVMe Controllers 00:24:50.167 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode1 00:24:50.167 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:50.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:50.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:50.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:50.167 Initialization complete. Launching workers. 00:24:50.167 Starting thread on core 1 00:24:50.167 Starting thread on core 2 00:24:50.167 Starting thread on core 3 00:24:50.167 Starting thread on core 0 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:24:50.428 00:24:50.428 real 0m15.359s 00:24:50.428 user 0m57.377s 00:24:50.428 sys 0m3.617s 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:50.428 ************************************ 00:24:50.428 END TEST nvmf_target_disconnect_tc3 00:24:50.428 ************************************ 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:50.428 rmmod nvme_rdma 00:24:50.428 rmmod nvme_fabrics 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 551052 ']' 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 551052 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 551052 ']' 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 551052 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 551052 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 551052' 00:24:50.428 killing process with pid 551052 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 551052 00:24:50.428 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 551052 00:24:50.688 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:50.688 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:24:50.688 00:24:50.688 real 0m35.028s 00:24:50.688 user 2m11.432s 00:24:50.688 sys 0m10.861s 00:24:50.688 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.688 09:01:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:50.688 ************************************ 00:24:50.688 END TEST nvmf_target_disconnect 00:24:50.688 ************************************ 00:24:50.688 09:01:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:50.688 00:24:50.688 real 5m7.881s 00:24:50.688 user 12m23.715s 00:24:50.688 sys 1m24.113s 00:24:50.688 09:01:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.688 09:01:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.688 ************************************ 00:24:50.688 END TEST nvmf_host 00:24:50.688 ************************************ 00:24:50.688 09:01:13 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:24:50.688 00:24:50.688 real 16m12.088s 00:24:50.688 user 40m25.270s 00:24:50.688 sys 4m35.377s 00:24:50.688 09:01:13 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.688 09:01:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:50.688 ************************************ 00:24:50.688 END TEST nvmf_rdma 00:24:50.688 ************************************ 00:24:50.688 09:01:13 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:50.688 09:01:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:50.688 09:01:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.688 09:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:50.949 ************************************ 00:24:50.949 START TEST spdkcli_nvmf_rdma 00:24:50.949 ************************************ 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:50.949 * Looking for test storage... 
00:24:50.949 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1689 -- # lcov --version 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.949 --rc genhtml_branch_coverage=1 00:24:50.949 --rc genhtml_function_coverage=1 00:24:50.949 --rc genhtml_legend=1 00:24:50.949 --rc geninfo_all_blocks=1 00:24:50.949 --rc geninfo_unexecuted_blocks=1 00:24:50.949 00:24:50.949 ' 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:50.949 --rc genhtml_branch_coverage=1 00:24:50.949 --rc genhtml_function_coverage=1 00:24:50.949 --rc genhtml_legend=1 00:24:50.949 --rc geninfo_all_blocks=1 00:24:50.949 --rc geninfo_unexecuted_blocks=1 00:24:50.949 00:24:50.949 ' 00:24:50.949 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:50.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.950 --rc genhtml_branch_coverage=1 00:24:50.950 --rc genhtml_function_coverage=1 00:24:50.950 --rc genhtml_legend=1 00:24:50.950 --rc geninfo_all_blocks=1 00:24:50.950 --rc geninfo_unexecuted_blocks=1 00:24:50.950 00:24:50.950 ' 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:50.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.950 --rc genhtml_branch_coverage=1 00:24:50.950 --rc genhtml_function_coverage=1 00:24:50.950 --rc genhtml_legend=1 00:24:50.950 --rc geninfo_all_blocks=1 00:24:50.950 --rc geninfo_unexecuted_blocks=1 00:24:50.950 00:24:50.950 ' 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.950 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=553021 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 553021 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 553021 ']' 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.950 09:01:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.210 [2024-11-06 09:01:13.985322] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:24:51.210 [2024-11-06 09:01:13.985368] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553021 ] 00:24:51.210 [2024-11-06 09:01:14.060023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.210 [2024-11-06 09:01:14.102943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.210 [2024-11-06 09:01:14.102944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.210 09:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.210 09:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:24:51.210 09:01:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:51.210 09:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.210 09:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.470 
09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.470 09:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.753 09:01:19 
spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:56.753 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:56.753 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:56.753 Found net devices under 0000:da:00.0: mlx_0_0 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:56.753 Found net devices under 0000:da:00.1: mlx_0_1 00:24:56.753 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # is_hw=yes 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ yes == yes ]] 
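Because the detected NICs are mlx5 parts, the harness rewrites NVME_CONNECT to 'nvme connect -i 15', i.e. it will request 15 I/O queues on every fabric connect. A manual equivalent against a subsystem like the cnode1 one used earlier in this log would be (standard nvme-cli flags; address and NQN are the ones from this run):

# -i / --nr-io-queues bounds the I/O queue count negotiated with the target:
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # tear the association down again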
00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # rdma_device_init 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:56.754 09:01:19 
spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:56.754 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:56.754 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:24:56.754 altname enp218s0f0np0 00:24:56.754 altname ens818f0np0 00:24:56.754 inet 192.168.100.8/24 scope global mlx_0_0 00:24:56.754 valid_lft forever preferred_lft forever 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:56.754 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:56.754 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:24:56.754 altname enp218s0f1np1 00:24:56.754 altname ens818f1np1 00:24:56.754 inet 192.168.100.9/24 scope global mlx_0_1 00:24:56.754 valid_lft forever preferred_lft forever 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # return 0 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:56.754 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:24:57.015 192.168.100.9' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:24:57.015 192.168.100.9' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # head -n 1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:24:57.015 192.168.100.9' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # tail -n +2 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # head -n 1 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:57.015 09:01:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:57.015 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:57.015 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:57.015 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:57.015 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:57.015 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:24:57.015 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:57.015 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:57.015 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:57.015 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:57.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:57.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:57.016 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:57.016 ' 00:25:00.310 [2024-11-06 09:01:22.594513] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c47510/0x1c55150) succeed. 00:25:00.310 [2024-11-06 09:01:22.603754] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c48bf0/0x1cd51c0) succeed. 
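The Executing command lines that follow replay the job string above one entry at a time. A hand-runnable sketch of the first few create steps, assuming scripts/spdkcli.py takes a single one-shot command per invocation (as the later check_match step does with ll /nvmf); arguments are copied from the job string:

SPDKCLI=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
# malloc bdev: 32 MB backing store, 512-byte blocks
$SPDKCLI "/bdevs/malloc create 32 512 Malloc1"
# RDMA transport with the test's reduced queue and IO-unit sizing
$SPDKCLI "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
# subsystem cnode1, then a listener on the first target IP
$SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"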
00:25:01.249 [2024-11-06 09:01:23.993526] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:25:03.787 [2024-11-06 09:01:26.477369] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:25:05.697 [2024-11-06 09:01:28.636409] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:25:07.608 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:07.608 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:07.608 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:07.608 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:07.608 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:07.608 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:07.608 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:07.608 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:07.608 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:07.608 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:07.608 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:07.608 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:07.608 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:25:07.608 09:01:30 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:07.868 09:01:30 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:08.129 09:01:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:08.129 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:08.129 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:08.129 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:08.129 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:25:08.129 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:25:08.129 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:08.129 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:08.129 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:08.129 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:08.129 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:08.129 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:08.129 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:08.129 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:08.129 ' 00:25:13.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:13.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:13.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:13.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:13.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:25:13.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:25:13.414 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:13.414 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:13.414 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:13.414 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:13.414 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:13.414 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:13.414 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:13.414 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 553021 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 553021 ']' 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 553021 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 553021 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 553021' 00:25:13.674 killing process with pid 553021 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 553021 00:25:13.674 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 553021 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:13.933 
09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:13.933 rmmod nvme_rdma 00:25:13.933 rmmod nvme_fabrics 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:25:13.933 00:25:13.933 real 0m23.163s 00:25:13.933 user 0m51.219s 00:25:13.933 sys 0m5.057s 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:13.933 09:01:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:13.933 ************************************ 00:25:13.933 END TEST spdkcli_nvmf_rdma 00:25:13.933 ************************************ 00:25:13.933 09:01:36 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:13.933 09:01:36 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:13.933 09:01:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:13.933 09:01:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:13.933 09:01:36 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:13.933 09:01:36 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:13.933 09:01:36 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:13.933 09:01:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:13.933 09:01:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.933 09:01:36 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:13.933 09:01:36 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:13.933 09:01:36 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:13.933 09:01:36 -- common/autotest_common.sh@10 -- # set +x 00:25:19.217 INFO: APP EXITING 00:25:19.217 INFO: killing all VMs 00:25:19.217 INFO: killing vhost app 00:25:19.217 INFO: EXIT DONE 00:25:21.850 Waiting for block devices as requested 00:25:21.850 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:21.850 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:21.850 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:21.850 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:21.850 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:21.850 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:21.850 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:22.125 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:22.125 
0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:22.125 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:22.125 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:22.384 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:22.384 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:22.384 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:22.644 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:22.644 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:22.644 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:25.940 Cleaning 00:25:25.940 Removing: /var/run/dpdk/spdk0/config 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:25.940 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:25.940 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:25.940 Removing: /var/run/dpdk/spdk1/config 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:25.940 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:25.940 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:25.940 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:25.940 Removing: /var/run/dpdk/spdk2/config 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:25.940 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:25.940 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:25.940 Removing: /var/run/dpdk/spdk3/config 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:25.940 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:25.940 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:25.940 Removing: /var/run/dpdk/spdk4/config 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 
00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:25.940 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:25.940 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:25.940 Removing: /dev/shm/bdevperf_trace.pid317595 00:25:25.940 Removing: /dev/shm/bdev_svc_trace.1 00:25:25.940 Removing: /dev/shm/nvmf_trace.0 00:25:25.940 Removing: /dev/shm/spdk_tgt_trace.pid275562 00:25:25.940 Removing: /var/run/dpdk/spdk0 00:25:25.940 Removing: /var/run/dpdk/spdk1 00:25:25.940 Removing: /var/run/dpdk/spdk2 00:25:25.940 Removing: /var/run/dpdk/spdk3 00:25:25.940 Removing: /var/run/dpdk/spdk4 00:25:25.940 Removing: /var/run/dpdk/spdk_pid273166 00:25:25.940 Removing: /var/run/dpdk/spdk_pid274354 00:25:25.940 Removing: /var/run/dpdk/spdk_pid275562 00:25:25.940 Removing: /var/run/dpdk/spdk_pid276197 00:25:25.940 Removing: /var/run/dpdk/spdk_pid277150 00:25:25.940 Removing: /var/run/dpdk/spdk_pid277252 00:25:25.940 Removing: /var/run/dpdk/spdk_pid278272 00:25:25.940 Removing: /var/run/dpdk/spdk_pid278366 00:25:25.940 Removing: /var/run/dpdk/spdk_pid278714 00:25:25.940 Removing: /var/run/dpdk/spdk_pid283471 00:25:25.940 Removing: /var/run/dpdk/spdk_pid284782 00:25:25.940 Removing: /var/run/dpdk/spdk_pid285087 00:25:25.940 Removing: /var/run/dpdk/spdk_pid285359 00:25:25.940 Removing: /var/run/dpdk/spdk_pid285667 00:25:25.940 Removing: /var/run/dpdk/spdk_pid285959 00:25:25.940 Removing: /var/run/dpdk/spdk_pid286212 00:25:25.940 Removing: /var/run/dpdk/spdk_pid286460 00:25:25.940 Removing: /var/run/dpdk/spdk_pid286740 00:25:25.940 Removing: /var/run/dpdk/spdk_pid287491 00:25:25.940 Removing: /var/run/dpdk/spdk_pid290486 00:25:25.940 Removing: /var/run/dpdk/spdk_pid290743 00:25:25.940 Removing: /var/run/dpdk/spdk_pid290999 00:25:25.940 Removing: /var/run/dpdk/spdk_pid291013 00:25:25.940 Removing: /var/run/dpdk/spdk_pid291501 00:25:25.940 Removing: /var/run/dpdk/spdk_pid291661 00:25:25.940 Removing: /var/run/dpdk/spdk_pid292017 00:25:25.940 Removing: /var/run/dpdk/spdk_pid292231 00:25:25.940 Removing: /var/run/dpdk/spdk_pid292489 00:25:25.940 Removing: /var/run/dpdk/spdk_pid292508 00:25:25.940 Removing: /var/run/dpdk/spdk_pid292764 00:25:25.940 Removing: /var/run/dpdk/spdk_pid292899 00:25:25.940 Removing: /var/run/dpdk/spdk_pid293339 00:25:25.940 Removing: /var/run/dpdk/spdk_pid293592 00:25:25.940 Removing: /var/run/dpdk/spdk_pid293892 00:25:25.940 Removing: /var/run/dpdk/spdk_pid297913 00:25:25.940 Removing: /var/run/dpdk/spdk_pid302141 00:25:25.940 Removing: /var/run/dpdk/spdk_pid312166 00:25:25.940 Removing: /var/run/dpdk/spdk_pid313082 00:25:25.940 Removing: /var/run/dpdk/spdk_pid317595 00:25:25.940 Removing: /var/run/dpdk/spdk_pid317840 00:25:25.940 Removing: /var/run/dpdk/spdk_pid321866 00:25:25.940 Removing: /var/run/dpdk/spdk_pid327517 00:25:25.940 Removing: /var/run/dpdk/spdk_pid330123 00:25:25.940 Removing: /var/run/dpdk/spdk_pid339641 00:25:25.940 Removing: /var/run/dpdk/spdk_pid363656 00:25:25.940 Removing: /var/run/dpdk/spdk_pid367267 00:25:25.940 Removing: /var/run/dpdk/spdk_pid408215 00:25:25.940 Removing: /var/run/dpdk/spdk_pid413327 00:25:25.940 Removing: 
/var/run/dpdk/spdk_pid418657 00:25:25.940 Removing: /var/run/dpdk/spdk_pid427106 00:25:25.940 Removing: /var/run/dpdk/spdk_pid466740 00:25:25.940 Removing: /var/run/dpdk/spdk_pid467619 00:25:25.940 Removing: /var/run/dpdk/spdk_pid468661 00:25:25.941 Removing: /var/run/dpdk/spdk_pid469741 00:25:25.941 Removing: /var/run/dpdk/spdk_pid474339 00:25:25.941 Removing: /var/run/dpdk/spdk_pid480973 00:25:25.941 Removing: /var/run/dpdk/spdk_pid481900 00:25:25.941 Removing: /var/run/dpdk/spdk_pid482811 00:25:25.941 Removing: /var/run/dpdk/spdk_pid483730 00:25:25.941 Removing: /var/run/dpdk/spdk_pid484185 00:25:25.941 Removing: /var/run/dpdk/spdk_pid488326 00:25:25.941 Removing: /var/run/dpdk/spdk_pid488418 00:25:25.941 Removing: /var/run/dpdk/spdk_pid493190 00:25:25.941 Removing: /var/run/dpdk/spdk_pid493656 00:25:25.941 Removing: /var/run/dpdk/spdk_pid494355 00:25:25.941 Removing: /var/run/dpdk/spdk_pid495045 00:25:25.941 Removing: /var/run/dpdk/spdk_pid495055 00:25:25.941 Removing: /var/run/dpdk/spdk_pid499545 00:25:25.941 Removing: /var/run/dpdk/spdk_pid500114 00:25:25.941 Removing: /var/run/dpdk/spdk_pid504217 00:25:25.941 Removing: /var/run/dpdk/spdk_pid506846 00:25:25.941 Removing: /var/run/dpdk/spdk_pid512213 00:25:25.941 Removing: /var/run/dpdk/spdk_pid522146 00:25:25.941 Removing: /var/run/dpdk/spdk_pid522148 00:25:25.941 Removing: /var/run/dpdk/spdk_pid542134 00:25:26.201 Removing: /var/run/dpdk/spdk_pid542371 00:25:26.201 Removing: /var/run/dpdk/spdk_pid548125 00:25:26.201 Removing: /var/run/dpdk/spdk_pid548480 00:25:26.201 Removing: /var/run/dpdk/spdk_pid550364 00:25:26.201 Removing: /var/run/dpdk/spdk_pid553021 00:25:26.201 Clean 00:25:26.201 09:01:49 -- common/autotest_common.sh@1449 -- # return 0 00:25:26.201 09:01:49 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:26.201 09:01:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.201 09:01:49 -- common/autotest_common.sh@10 -- # set +x 00:25:26.201 09:01:49 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:26.201 09:01:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.201 09:01:49 -- common/autotest_common.sh@10 -- # set +x 00:25:26.201 09:01:49 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:26.201 09:01:49 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:25:26.201 09:01:49 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:25:26.201 09:01:49 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:26.201 09:01:49 -- spdk/autotest.sh@394 -- # hostname 00:25:26.201 09:01:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:25:26.461 geninfo: WARNING: invalid characters removed from testname! 
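The lcov passes that follow break down into capture, merge, and prune. Condensed, with the workspace paths shortened into variables and the --rc branch/function-coverage options from the traced commands omitted for readability:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
out=$spdk/../output
# capture the counters produced by the test run, tagged with the host name
lcov -q -c --no-external -d "$spdk" -t spdk-wfp-06 -o "$out/cov_test.info"
# fold the test capture into the pre-test baseline
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# prune bundled dpdk and system sources from the report
lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"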
00:25:48.415 09:02:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:48.415 09:02:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:49.796 09:02:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:51.177 09:02:14 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:53.084 09:02:15 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:54.990 09:02:17 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:56.898 09:02:19 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:56.898 09:02:19 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:25:56.898 09:02:19 -- common/autotest_common.sh@1689 -- $ lcov --version 00:25:56.898 09:02:19 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:25:56.898 09:02:19 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:25:56.898 09:02:19 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:25:56.898 09:02:19 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:25:56.898 09:02:19 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:25:56.898 09:02:19 -- scripts/common.sh@336 -- $ IFS=.-: 00:25:56.898 09:02:19 -- scripts/common.sh@336 -- $ read -ra ver1 00:25:56.898 09:02:19 -- scripts/common.sh@337 -- $ IFS=.-: 00:25:56.898 09:02:19 -- scripts/common.sh@337 -- $ read -ra ver2 00:25:56.898 09:02:19 -- 
scripts/common.sh@338 -- $ local 'op=<' 00:25:56.898 09:02:19 -- scripts/common.sh@340 -- $ ver1_l=2 00:25:56.898 09:02:19 -- scripts/common.sh@341 -- $ ver2_l=1 00:25:56.898 09:02:19 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:25:56.898 09:02:19 -- scripts/common.sh@344 -- $ case "$op" in 00:25:56.898 09:02:19 -- scripts/common.sh@345 -- $ : 1 00:25:56.898 09:02:19 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:25:56.899 09:02:19 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:56.899 09:02:19 -- scripts/common.sh@365 -- $ decimal 1 00:25:56.899 09:02:19 -- scripts/common.sh@353 -- $ local d=1 00:25:56.899 09:02:19 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:25:56.899 09:02:19 -- scripts/common.sh@355 -- $ echo 1 00:25:56.899 09:02:19 -- scripts/common.sh@365 -- $ ver1[v]=1 00:25:56.899 09:02:19 -- scripts/common.sh@366 -- $ decimal 2 00:25:56.899 09:02:19 -- scripts/common.sh@353 -- $ local d=2 00:25:56.899 09:02:19 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:25:56.899 09:02:19 -- scripts/common.sh@355 -- $ echo 2 00:25:56.899 09:02:19 -- scripts/common.sh@366 -- $ ver2[v]=2 00:25:56.899 09:02:19 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:25:56.899 09:02:19 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:25:56.899 09:02:19 -- scripts/common.sh@368 -- $ return 0 00:25:56.899 09:02:19 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.899 09:02:19 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:25:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.899 --rc genhtml_branch_coverage=1 00:25:56.899 --rc genhtml_function_coverage=1 00:25:56.899 --rc genhtml_legend=1 00:25:56.899 --rc geninfo_all_blocks=1 00:25:56.899 --rc geninfo_unexecuted_blocks=1 00:25:56.899 00:25:56.899 ' 00:25:56.899 09:02:19 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:25:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.899 --rc genhtml_branch_coverage=1 00:25:56.899 --rc genhtml_function_coverage=1 00:25:56.899 --rc genhtml_legend=1 00:25:56.899 --rc geninfo_all_blocks=1 00:25:56.899 --rc geninfo_unexecuted_blocks=1 00:25:56.899 00:25:56.899 ' 00:25:56.899 09:02:19 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:25:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.899 --rc genhtml_branch_coverage=1 00:25:56.899 --rc genhtml_function_coverage=1 00:25:56.899 --rc genhtml_legend=1 00:25:56.899 --rc geninfo_all_blocks=1 00:25:56.899 --rc geninfo_unexecuted_blocks=1 00:25:56.899 00:25:56.899 ' 00:25:56.899 09:02:19 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:25:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.899 --rc genhtml_branch_coverage=1 00:25:56.899 --rc genhtml_function_coverage=1 00:25:56.899 --rc genhtml_legend=1 00:25:56.899 --rc geninfo_all_blocks=1 00:25:56.899 --rc geninfo_unexecuted_blocks=1 00:25:56.899 00:25:56.899 ' 00:25:56.899 09:02:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:56.899 09:02:19 -- scripts/common.sh@15 -- $ shopt -s extglob 00:25:56.899 09:02:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:56.899 09:02:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.899 09:02:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.899 09:02:19 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.899 09:02:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.899 09:02:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.899 09:02:19 -- paths/export.sh@5 -- $ export PATH 00:25:56.899 09:02:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.899 09:02:19 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:25:56.899 09:02:19 -- common/autobuild_common.sh@486 -- $ date +%s 00:25:56.899 09:02:19 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730880139.XXXXXX 00:25:56.899 09:02:19 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730880139.QQRsf1 00:25:56.899 09:02:19 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:25:56.899 09:02:19 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:25:56.899 09:02:19 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:25:56.899 09:02:19 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:56.899 09:02:19 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:25:56.899 09:02:19 -- common/autobuild_common.sh@502 -- $ get_config_params 00:25:56.899 09:02:19 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:25:56.899 09:02:19 -- common/autotest_common.sh@10 -- $ set +x 00:25:56.899 09:02:19 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:25:56.899 09:02:19 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:25:56.899 09:02:19 -- pm/common@17 -- $ local monitor 00:25:56.899 09:02:19 -- pm/common@19 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:25:56.899 09:02:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:56.899 09:02:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:56.899 09:02:19 -- pm/common@21 -- $ date +%s 00:25:56.899 09:02:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:56.899 09:02:19 -- pm/common@21 -- $ date +%s 00:25:56.899 09:02:19 -- pm/common@25 -- $ sleep 1 00:25:56.899 09:02:19 -- pm/common@21 -- $ date +%s 00:25:56.899 09:02:19 -- pm/common@21 -- $ date +%s 00:25:56.899 09:02:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880139 00:25:56.899 09:02:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880139 00:25:56.899 09:02:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880139 00:25:56.899 09:02:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880139 00:25:56.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880139_collect-cpu-load.pm.log 00:25:56.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880139_collect-vmstat.pm.log 00:25:56.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880139_collect-cpu-temp.pm.log 00:25:56.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880139_collect-bmc-pm.bmc.pm.log 00:25:57.836 09:02:20 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:25:57.836 09:02:20 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:25:57.836 09:02:20 -- spdk/autopackage.sh@14 -- $ timing_finish 00:25:57.836 09:02:20 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:57.836 09:02:20 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:57.836 09:02:20 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:57.836 09:02:20 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:57.836 09:02:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:57.836 09:02:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:57.836 09:02:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.836 09:02:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:57.836 09:02:20 -- pm/common@44 -- $ pid=567314 00:25:57.836 09:02:20 -- pm/common@50 -- $ kill -TERM 567314 00:25:57.836 09:02:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.836 09:02:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:57.836 09:02:20 -- pm/common@44 -- $ pid=567316 00:25:57.836 
09:02:20 -- pm/common@50 -- $ kill -TERM 567316 00:25:57.836 09:02:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.836 09:02:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:25:57.836 09:02:20 -- pm/common@44 -- $ pid=567318 00:25:57.836 09:02:20 -- pm/common@50 -- $ kill -TERM 567318 00:25:57.837 09:02:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.837 09:02:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:25:57.837 09:02:20 -- pm/common@44 -- $ pid=567342 00:25:57.837 09:02:20 -- pm/common@50 -- $ sudo -E kill -TERM 567342 00:25:57.837 + [[ -n 196192 ]] 00:25:57.837 + sudo kill 196192 00:25:57.848 [Pipeline] } 00:25:57.865 [Pipeline] // stage 00:25:57.870 [Pipeline] } 00:25:57.883 [Pipeline] // timeout 00:25:57.888 [Pipeline] } 00:25:57.901 [Pipeline] // catchError 00:25:57.905 [Pipeline] } 00:25:57.919 [Pipeline] // wrap 00:25:57.924 [Pipeline] } 00:25:57.935 [Pipeline] // catchError 00:25:57.944 [Pipeline] stage 00:25:57.945 [Pipeline] { (Epilogue) 00:25:57.957 [Pipeline] catchError 00:25:57.958 [Pipeline] { 00:25:57.971 [Pipeline] echo 00:25:57.972 Cleanup processes 00:25:57.978 [Pipeline] sh 00:25:58.268 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:58.268 567485 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:25:58.268 567813 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:58.282 [Pipeline] sh 00:25:58.567 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:58.567 ++ grep -v 'sudo pgrep' 00:25:58.567 ++ awk '{print $1}' 00:25:58.567 + sudo kill -9 567485 00:25:58.578 [Pipeline] sh 00:25:58.864 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:06.989 [Pipeline] sh 00:26:07.275 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:07.275 Artifacts sizes are good 00:26:07.288 [Pipeline] archiveArtifacts 00:26:07.295 Archiving artifacts 00:26:07.708 [Pipeline] sh 00:26:08.023 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:26:08.036 [Pipeline] cleanWs 00:26:08.046 [WS-CLEANUP] Deleting project workspace... 00:26:08.046 [WS-CLEANUP] Deferred wipeout is used... 00:26:08.053 [WS-CLEANUP] done 00:26:08.055 [Pipeline] } 00:26:08.070 [Pipeline] // catchError 00:26:08.083 [Pipeline] sh 00:26:08.368 + logger -p user.info -t JENKINS-CI 00:26:08.378 [Pipeline] } 00:26:08.390 [Pipeline] // stage 00:26:08.395 [Pipeline] } 00:26:08.409 [Pipeline] // node 00:26:08.414 [Pipeline] End of Pipeline 00:26:08.459 Finished: SUCCESS
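For reference, the stop_monitor_resources teardown traced at 09:02:20 above amounts to reading each collector's pid file and sending TERM, the BMC collector being the one stopped via sudo; a sketch with the pid-file names taken from the log:

power=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
for pidfile in collect-cpu-load.pid collect-vmstat.pid collect-cpu-temp.pid; do
    # each sampler wrote its pid here when start_monitor_resources launched it
    [[ -e $power/$pidfile ]] && kill -TERM "$(<"$power/$pidfile")"
done
# the BMC power monitor runs under sudo, so it is stopped the same way
[[ -e $power/collect-bmc-pm.pid ]] && sudo -E kill -TERM "$(<"$power/collect-bmc-pm.pid")"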